• AceJohnny2 6 hours ago

    Thanks for this writeup. Whenever people complain about some service removing or making it harder to try out a free tier, I think they don't realize the amount of abuse that needs to be managed by the service providers.

    "Why do things suck?" Because parasites ruined it for the rest of us.

    > We have to accept a certain amount of abuse. It is a far better use of our time to use it improving Geocodio for legitimate users rather than trying to squash everyone who might create a handful of accounts

    Reminds me of Patrick McKenzie's "The optimal amount of fraud is non-zero" [1] (wrt banking systems)

    Also, your abuse-scoring system sounds a bit like Bayesian spam filtering, where you have a bunch of signals (Disposable Email, IP from Risky Source, Rate of signup...) that you correlate, no? (Rough sketch below.)

    [1] https://www.bitsaboutmoney.com/archive/optimal-amount-of-fra...
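
    A minimal sketch of that kind of signal combination, naive-Bayes style. The signal names, likelihoods, and prior below are invented for illustration; this is not Geocodio's actual system:

        import math

        # (P(signal | abuser), P(signal | legit)) -- hypothetical values
        SIGNALS = {
            "disposable_email": (0.60, 0.02),
            "risky_ip":         (0.40, 0.05),
            "signup_burst":     (0.30, 0.01),
        }

        def abuse_probability(observed: set, prior: float = 0.05) -> float:
            """Fold the signals that fired into a posterior P(abuser).
            Simplification: absent signals contribute no evidence."""
            log_odds = math.log(prior / (1 - prior))
            for name, (p_abuse, p_legit) in SIGNALS.items():
                if name in observed:
                    log_odds += math.log(p_abuse / p_legit)
            return 1 / (1 + math.exp(-log_odds))

        print(abuse_probability({"disposable_email", "signup_burst"}))  # ~0.98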

    • dehrmann 2 hours ago

      > "The optimal amount of fraud is non-zero" [1] (wrt banking systems)

      It's a bit like how each 9 of uptime is an order of magnitude (ish) more expensive to achieve, and most use cases don't care whether it's 99.999% or 99.9999%.
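
      (For scale: a year is about 525,600 minutes, so 99.9% uptime allows roughly 8.8 hours of downtime per year, 99.99% about 53 minutes, 99.999% about 5.3 minutes, and 99.9999% about 32 seconds.)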

    • caydenm 4 hours ago

      Free tier and free trial abuse is a huge problem, but also a huge opportunity.

      We have seen customers where free-tier abusers created 80k+ accounts in a single day and cost millions of dollars. We have also seen businesses, like Oddsjam, add significant revenue by prompting abusers to pay.

      The psychology of abuse is also quite interesting: even what appear to be serious abusers (think fake credit cards, new email accounts, etc.) will refuse a discount and pay full price if they feel they 'got caught'.

      • akerl_ 4 hours ago

        I’d love to hear more about the idea that somebody making a fraudulent signup with a stolen credit card is potentially going to pay full price if they “get caught”.

        • caydenm 2 hours ago

          There are obviously people who are doing free trial abuse for commercial gain, e.g. signing up 1k accounts to test credit cards or to resell accounts. They are not going to convert (although sometimes you can successfully convert them into affiliates).

          We have seen individuals just trying to get free accounts week after week who, when nudged once, immediately pay thousands of dollars, even after using fake, stolen, or empty cards.

          These individuals think they are being cheeky, and when they are 'caught' they revert to doing the right thing.

          • WhitneyLand 26 minutes ago

            You got called out, responded, but didn’t really address the point. Looks like the original claim was overstated.

            • caydenm 13 minutes ago

              I was referring to generated or disposable card numbers rather than stolen ones. Maybe that is the confusion?

              A concrete example of converting a user who used these kinds of cards for free trial abuse: one user signed up eight weeks in a row using different emails, names, IPs, and cards. Nudging was enabled for these users, and on attempting to sign up for their ninth trial, they immediately switched back to their original account and converted at full price.

          • TeMPOraL 3 hours ago

            I imagine an amateur who wants the problem to go away as quickly as possible and with minimum fuss, to the point of overcompensating out of anxiety.

            • caydenm 2 hours ago

              100%! It used to be easy; now it is frustrating to get to the thing they want (the service), and the easiest route is to pay.

        • polishdude20 an hour ago

          How does an address API get its info? Presumably addresses don't change often, right? When they do, how does a service like this update its records?

        • manmal 3 hours ago

          Apple's mail privacy protection creates disposable addresses with the host icloud.com. It's not as hassle-free and can't be automated, but it could definitely be used to create a lot of free accounts. But I don't see them banning this domain, I guess?

          • prteja11 7 hours ago

            I get why they don't want to share their detection mechanics for potentially fraudulent signups, but that is a very interesting topic to learn about and discuss.

            • thecodemonkey 6 hours ago

              I would love to do a more in-depth talk about this at some point, with some more concrete examples.

            • gwbas1c 5 hours ago

              Makes me wonder how easy / hard it would be to turn this kind of feature into a standalone product.

              I.e., send email, IP, browser agent, and perhaps a few other datapoints to a service, and get back a "fraudulent" rating?

              • the_bear 5 hours ago

                This is basically what Google's reCAPTCHA v3 does: https://developers.google.com/recaptcha/docs/v3

                The other versions of recaptcha show the annoying captchas, but v3 just monitors various signals and gives a score indicating the likelihood that it's a bot.

                We use this to reduce spam in some parts of our app, and I think there's an opportunity to make a better version, but it'd be tough to make it enough better that people would pay for it, since Google's solution is decent and free.
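
                A rough sketch of the server side of that flow. The token comes from grecaptcha.execute() on the client; the 0.5 cutoff and the handler name are illustrative, not Google's recommendation:

                    import requests

                    RECAPTCHA_SECRET = "your-secret-key"  # placeholder

                    def recaptcha_score(token: str, remote_ip: str) -> float:
                        """Verify a reCAPTCHA v3 token; 1.0 = very likely human."""
                        resp = requests.post(
                            "https://www.google.com/recaptcha/api/siteverify",
                            data={
                                "secret": RECAPTCHA_SECRET,
                                "response": token,
                                "remoteip": remote_ip,
                            },
                            timeout=5,
                        ).json()
                        if not resp.get("success"):
                            return 0.0  # invalid or expired token
                        return resp.get("score", 0.0)

                    # Example gating, with an illustrative threshold:
                    #   if recaptcha_score(token, request_ip) < 0.5:
                    #       flag_signup_for_review()  # hypothetical handler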

                • miki123211 3 hours ago

                  Also called DaaS, "discrimination as a service"

                  • pests 9 minutes ago

                    Not sure if this was meant as a slight, but yes, payment providers and other services need to discriminate valid uses of their service from fraudulent ones.

                • hn_user82179 5 hours ago

                  Very cool, I wasn't expecting to find this so interesting. Yesterday, for the first time, I thought about the "abuse the free tier" actors: I was trying to use a batching job service that limited free-tier batch sizes to 5, so low that it defeated the point of using an automated job in the first place. I think the little info box explained that they keep the limit low to prevent abuse, and I started thinking about other ways they could prevent that abuse. Your post was very topical, thanks for sharing!

                  • oger 6 hours ago

                    Great writeup. Simple heuristics very often work wonders. The fraudsters are out there trying to poke holes in your shield. Some time ago we were running a mobile service provider and had issues with fraudulent postpaid subscribers; however, the cost of using background-checking services was substantial. We solved it quite effectively by turning the background checks on only when the level of fraud went over a certain threshold, which made the fraudsters go away for some weeks. We kept up this on-and-off pattern for a very long time with great success, as it significantly lowered the friction to sign up while the checks were off…
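
                    A toy sketch of that on/off pattern (the thresholds are hypothetical; the real system presumably tracked fraud rates per billing period):

                        # Hysteresis: expensive checks switch on when fraud
                        # spikes, and off again once the fraudsters move on.
                        ENABLE_AT = 0.02    # fraud rate that turns checks on
                        DISABLE_AT = 0.005  # fraud rate that turns them off

                        checks_enabled = False

                        def update_checks(fraud_rate: float) -> bool:
                            global checks_enabled
                            if not checks_enabled and fraud_rate >= ENABLE_AT:
                                checks_enabled = True   # shield up
                            elif checks_enabled and fraud_rate <= DISABLE_AT:
                                checks_enabled = False  # low-friction signups
                            return checks_enabled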

                    • EGreg 5 hours ago

                      Where can we get a blocklist of those throwaway email domains?

                      Or perhaps a really big whitelist of good ones? That would be extremely helpful!

                      • Etheryte 4 hours ago

                        Neither is a viable option; otherwise all the big players would've done this a long time ago. Nothing is stopping you from creating a throwaway account on Gmail, while someone using a custom domain might be your new B2B lead. There's no realistic way to tell which it is simply from the domain.

                        • DecentShoes 3 hours ago

                          I think they were referring to actual throwaway email providers. Companies that specifically provide that as a service.

                        • pigeons 4 hours ago

                          I don't see how you could know what everyone's personal domain is to whitelist.

                        • AutistiCoder 7 hours ago

                          So you implemented some sort of machine learning?

                          • thecodemonkey 6 hours ago

                            Not at this time. Some simple heuristics go a long way and also make it very easy to test and debug the logic.

                            • skissane 5 hours ago

                              I’ve seen fraud detection used in a SaaS product, and the great thing about a weighted-rules approach is that professional services can understand it well enough to adjust it without help from engineering or data science. They can explain to customers how it produced the results it did in a particular case, and the tradeoffs of adjusting the weights or thresholds, and the customers can understand it too. A machine-learning model, by contrast, is much harder to understand and adjust, so issues are much more likely to be escalated back to engineering.

                              (This isn’t protecting the SaaS vendor against abusive signups; it is a feature of the SaaS product that helps its customers detect fraud committed against them, within the SaaS product’s scope.)
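
                              A minimal sketch of that kind of weighted-rules scoring. Rule names, weights, and the threshold are invented for illustration; the point is that every number is inspectable and adjustable by a non-engineer:

                                  # Illustrative rules -- each weight is a plain
                                  # number that can be tweaked and explained.
                                  RULES = {
                                      "disposable_email_domain":  40,
                                      "ip_on_abuse_blocklist":    30,
                                      "many_signups_same_ip":     25,
                                      "card_previously_declined": 20,
                                  }
                                  FLAG_THRESHOLD = 60  # scores >= this get flagged

                                  def score_signup(signup: dict):
                                      """Total score plus the rules that fired,
                                      so each result can be explained."""
                                      fired = [n for n in RULES if signup.get(n)]
                                      return sum(RULES[n] for n in fired), fired

                                  score, reasons = score_signup({
                                      "disposable_email_domain": True,
                                      "many_signups_same_ip": True,
                                  })
                                  # -> 65, flagged (>= 60), with both reasons listed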

                              • gwbas1c 5 hours ago

                                I once did a machine learning project at Intel. The end result was no better than simple statistics, and the statistics were easier to understand and explain.

                                I realized the machine learning project was a "solution in search of a problem," and left.

                                • lupusreal 5 hours ago

                                  Career hack: skip the machine learning and implement the simple statistics, then call it machine learning and refuse to explain it.