On May 14, Google announced it would be rolling out AI-generated answers to supplement its traditional search results. AI Overviews duly launched, and within days, as Wired beautifully summed it up, “the feature was widely mocked for producing wrong and sometimes bonkers answers, like recommendations to eat rocks or make pizza with glue.” Google published a blog post admitting the issues and paused the rollout.
I was as entertained, gobsmacked and horrified as anyone by what one of the world’s largest companies had managed to do with this powerful and significant new technology, sacrificing accuracy and responsibility in the race to be first.
What really struck me, though, was how much of the conversation afterward revolved around the need for precision and safety. Because there’s one group of folks I know quite well who couldn’t care less about precision or safety. And while the need to do better than AI Overviews and Microsoft’s Recall will (hopefully) make legitimate companies careful about fresh uses of AI, for fraudsters, good enough is more than good enough.
Accuracy Is Ultimately About ROI
AI Overviews was a PR trainwreck for Google, but the spectacle obscures the real reason to care about accuracy: business uses of AI are results-oriented. The point of the exercise isn’t the specific thing you get from using AI but what that thing can do for you, your goals and your business. It’s a step in a process, not an end in itself. I’ll give just a few examples to show what I mean:
- AI chatbots to streamline customer service: Great when it works, but a bot that isn’t good enough can make binding promises about service or returns that the company never intended and that are expensive to honor.
- GenAI to streamline logistics: Get it wrong, and you could waste valuable resources sending the wrong things to the wrong places while not having enough of the right ones in the right places.
- GenAI analyzing raw materials or finished products: If it’s accurate, it’s a great way to ensure that things are what they should be. If not, the mistakes could be hugely costly to fix.
- Conversational shopping (AI providing advice and making purchases easy, possibly even via voice assistant): Incredibly convenient, as long as it isn’t making mistakes about what a customer is saying, wants, or actually intends to buy. Fixing mistakes could be pricey and could cost the business customers over the long term.
If GenAI isn’t accurate enough, it’s not worth the risk in these sorts of use cases, which means it isn’t actionable technology for many of the applications for which it’s being discussed. The bar for accuracy is high.
On the flip side, there are some consumer-facing use cases, often things we think of as “just for fun,” where accuracy is much less important because the focus isn’t on a downstream outcome, as in the cases we’ve just been describing, but on the GenAI output itself: fun conversations with historical characters, images for illustrative purposes or memes, perhaps digital goods to use in gaming. Here, good enough is good enough.
People are usually willing to go many rounds in these use cases to get something close to what they want and aren’t generally too exacting about the result. Here’s the thing: none of these “good enoughs” are game-changing for any industry. They’re nice to have. There’s typically little or no money on the table, and accuracy isn’t an issue because there simply isn’t much riding on the output.
Fraud, as is so often true, is different.
Fraudsters Can Accept a Low Bar
With online fraud, there’s plenty of money on the table. In the US, more than $12.5 billion in online fraud losses was reported in 2023, and last year, the global retail sector lost $429 billion to payments fraud. Fraudsters have every incentive to stick to their dirty trade.
Fraud is an unusual hybrid. It’s results-oriented, like the business use cases I mentioned above: the point is the money that can be stolen, not whatever individual thing AI can generate. AI is just a tool; using it is just one more step in a fraudster’s journey toward successfully monetizing an attack.
On the other hand, accuracy is not crucial. Fraudsters are focused on ROI, so the question is whether GenAI speeds things up enough to make it worth using even when it’s inaccurate. So far, the indications are that it does. Even if I think only about things I’ve managed myself using GenAI, it’s evident that the benefits are lucrative enough at scale to be well worth the investment.
- Speeding up the creation of fake data, as I did here. It took some effort, but it was still relatively quick and easy.
- Recreating an existing video in a different language for use in social engineering, as I did here. It was speedy and astonishingly simple, potentially opening up new languages and regions for attack.
- Creating a deepfake, as I did here, which can be used in a wide variety of attacks. Again, it is very fast, very easy, and very adaptable.
With all of these, it’s okay if it’s not perfect. It doesn’t need to work every time; it just needs to work often enough. From the regrettable success of scamming and phishing, we know that it does. Low accuracy is fine because it can still result in a high reward.
Fraud Fighters Need to Track These Trends
For now, this plays out mostly in the world of social engineering and phishing. That is very much part of the online fraud world, not least because it results in more stolen account data becoming available. There’s a knock-on effect there as well: more supply usually means lower prices. That, in turn, means identities are cheaper to purchase, so perfecting an attack (which usually requires burning through some identities as tests) becomes more affordable. For now, though, the effect is limited.
Fraud fighters must be aware of these trends and keep tracking them, because the limitations might loosen considerably over time. The more GenAI evolves, the more likely it is that fraudsters without technical expertise will be able to use no-code or low-code interfaces to streamline their attacks in numerous ways. (That makes attacks cheaper from the get-go, because they no longer need to pay a coder to write scripts for them.) For example:
- If phishing and social engineering become effective enough at a great enough scale, account takeover attempts could skyrocket.
- If these types of account information can be stolen together with more information about a user’s digital footprint (browser, device, settings, etc.), identifying account takeovers could become more challenging even for teams currently comfortably on top of this problem; the sketch after this list shows why.
- Writing programs to automate flipping through identities and matching proxies as part of an attack could become accessible to anyone, including fraudsters without software engineering abilities.
- Automating web browsing activity could become equally easy and widely accessible.
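To make the digital-footprint point concrete, here’s a minimal sketch of the kind of naive fingerprint-consistency check a fraud team might run as one signal among many. Everything in it is a hypothetical assumption of mine: the attribute set, the field names and the threshold are illustrative, not taken from any real fraud-prevention product. The point it demonstrates is that once a fraudster has stolen a victim’s footprint along with their credentials, they can replay matching attributes, and a simple check like this sails straight through.

```python
# Hypothetical sketch of a naive device-fingerprint consistency check.
# All attribute names and values are illustrative assumptions, not taken
# from any real fraud-prevention product.
from dataclasses import dataclass

@dataclass
class SessionFingerprint:
    browser: str    # e.g., "Chrome 125"
    os: str         # e.g., "Windows 11"
    timezone: str   # e.g., "America/New_York"
    language: str   # e.g., "en-US"
    screen: str     # e.g., "1920x1080"

TRACKED_FIELDS = ["browser", "os", "timezone", "language", "screen"]

def ato_risk_score(profile: SessionFingerprint, login: SessionFingerprint) -> float:
    """Fraction (0.0-1.0) of tracked attributes that differ from the
    account's established profile; higher means more suspicious."""
    mismatches = sum(
        1 for field in TRACKED_FIELDS
        if getattr(profile, field) != getattr(login, field)
    )
    return mismatches / len(TRACKED_FIELDS)

profile = SessionFingerprint("Chrome 125", "Windows 11",
                             "America/New_York", "en-US", "1920x1080")

# A login whose fingerprint drifts far from the profile gets flagged...
odd_login = SessionFingerprint("Firefox 126", "Linux",
                               "Europe/Bucharest", "en-US", "1366x768")
print(ato_risk_score(profile, odd_login))  # 0.8 -> step-up authentication

# ...but an attacker who stole the victim's footprint alongside their
# credentials can replay matching attributes and score a perfect 0.0.
spoofed_login = SessionFingerprint("Chrome 125", "Windows 11",
                                   "America/New_York", "en-US", "1920x1080")
print(ato_risk_score(profile, spoofed_login))  # 0.0 -> looks legitimate
```

This is, of course, far cruder than what real fraud-prevention systems do, but it captures the dynamic: the more of the victim’s footprint the attacker holds, the less useful any single consistency signal becomes.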
Again, fraudsters won’t need a high level of accuracy. They’ll just need it to be good enough. If fraud prevention teams aren’t prepared, an attack might not need much to be good enough.
Doriel Abrahams is the Principal Technologist at Forter, where he monitors emerging trends in the fight against fraudsters, including new fraud rings, attacker MOs, rising technologies, etc. His mission is to provide digital commerce leaders with the latest risk intel so they can adapt and get ahead of what’s to come.