The vibrant, diverse online criminal ecosystem, with its marketplaces, forums, apps and services, sometimes feels like a true shadow internet to me when I explore its alleyways. It feels like you can buy anything for a reasonable price if you look hard enough (I hate to tell you this, but you pretty much can).
Sometimes, that can feel overwhelming if you’re on the side of the angels, but recently, I’ve realized that it’s also a weakness on the criminal side now that generative AI is on the scene.
Handy, Dandy Fraud-as-a-Service
The online criminal ecosystem is divided into areas of specialization. Some criminals are experts at creating scripts for scam calls or chats, for example, while others have extensive experience creating automated scripts to speed up the flow of steps needed for an attack on a digital commerce site.
Others might be great at creating malware to order, while still others centralize and organize stolen identity data, or vet hacked accounts to assess their value so that fellow fraudsters can make informed decisions about which accounts are worth buying access to. Still others provide escrow services to hedge against the fact that criminals (rightly) don’t trust each other. And so on.
This is a considerable advantage for criminals, giving them access to knowledge, skills and applications far beyond any one fraudster’s scope. It means that the ecosystem has evolved to a high level of sophistication because those who focus on specific areas can get really good at them – a benefit that other crooks can leverage.
This setup has been in place for over a decade and has only grown in scale over time. Lately, though, I’ve been realizing that when it comes to generative AI, it’s not playing out as the advantage I might have expected.
Division of Labor Prevents Serendipitous Success
Intuitively, I would have expected the sophistication of the fraud-as-a-service ecosystem to become a fraud fighter’s nightmare once generative AI leaped onto the scene. As I’ve written elsewhere, there are so many ways in which LLMs increase the speed and scale of existing fraud operations.
But because of the divide-and-conquer approach baked into Fraud-as-a-Service, this isn’t quite how it’s played out.
Let’s take an attack on an e-commerce site as an example. Fraudsters have to scope out new target sites and go through the purchase flow bit by bit, probing for vulnerabilities and discovering any traps left in place to catch them, such as velocity checks, IP blocks, scrolling or browsing speed evaluation, etc. Then they need to work out how to avoid those traps.
They test their attack by burning identities until they reach a successful checkout and know they’ve found a safe path to cashing out with their attack. Only once they’ve reached this point do they contact a criminal coder to get them to automate the path they’ve found so they can use it at scale and speed.
Once they have the automated script, they have only a limited number of runs until the fraud systems catch it—though how limited depends on the system in place, and they don’t know in advance what the limit will be. Once they’ve maxed out that trick, they have to start all over again, attempting new variations, which will be taken to a coder to automate when ready.
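To make one of those “traps” concrete: a velocity check is essentially rate limiting keyed on something the attacker can’t cheaply change, such as a card fingerprint or IP address, which is why an automated script only gets a limited number of runs before it’s flagged. Here’s a minimal sliding-window sketch (the key names and thresholds are hypothetical illustrations, not any real vendor’s logic):

```python
import time
from collections import defaultdict, deque


class VelocityCheck:
    """Minimal sliding-window velocity check: flag a key (e.g. a card
    fingerprint or an IP address) that exceeds max_events attempts
    within window_seconds. Illustrative sketch only."""

    def __init__(self, max_events=5, window_seconds=3600.0):
        self.max_events = max_events
        self.window_seconds = window_seconds
        self._events = defaultdict(deque)  # key -> timestamps of recent attempts

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        q = self._events[key]
        # Drop attempts that have fallen out of the time window.
        while q and now - q[0] > self.window_seconds:
            q.popleft()
        if len(q) >= self.max_events:
            return False  # velocity exceeded: block or flag for review
        q.append(now)
        return True


# An automated checkout script reusing the same IP trips the limit quickly:
vc = VelocityCheck(max_events=3, window_seconds=60.0)
results = [vc.allow("ip:203.0.113.7", now=float(t)) for t in range(5)]
# results == [True, True, True, False, False]
```

Real fraud systems layer many such signals, and the thresholds are deliberately opaque, which is exactly why fraudsters can’t know in advance how many runs they’ll get.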
Adding a little extra wrinkle to this process, criminal coders often deliberately keep themselves away from any contact with actual stolen information, refusing to load it into their scripts or to be part of doing so. It’s a form of personal protection, or deniability if you like. So that’s an extra step for the fraudster.
Generative AI Isn’t Bridging the Gap
I had assumed that generative AI would be a huge leap in streamlining this process, finding new vulnerabilities like lightning and exploiting them the same day or even faster. But that’s not the case. First, GenAI struggles to exploit vulnerabilities without guidance, and second, the preservation of specialization between different areas of the fraud arena means that the relevant knowledge and incentives are not combined.
Coders are not incentivized to get creative about finding new areas of exploitation or finding ways to make GenAI better at doing it because that’s not part of their current process. They build to order; they don’t hunt. Those who do get creative about finding exploits, on the other hand, generally aren’t the ones who are used to working on automation.
They may use GenAI to streamline their own parts of the process, like generating fake data or finding IP addresses that match the billing addresses of the credit cards they hold, but while this speeds things up a bit, it doesn’t remove the main effort of the process described above.
Exception: Phishing
There’s an important exception here that I want to flag, though I won’t dive into it in detail in this article. Phishing is not slowed down by the division of labor I’ve discussed here. All you need to leverage GenAI to upgrade your phishing attacks is basic knowledge of how to use the highly intuitive LLMs that we all already know and understand.
The videos I’ve mentioned before are examples of how true this is. It was shocking how easy it was to get a version of my video in French, which I don’t speak. I could have done that in Spanish, German, Swedish, you name it, just as easily. Many fraudsters are doing just that. This means that the audience they can attack is suddenly global.
There are deepfake voice messages, of the kind recently used to steal $25 million, which can be employed in diverse professional and personal scam settings. Chat messages are just as open to this kind of enhancement, extending the reach and success rate of scams. Tricking people into handing over their personal information can now be as easy as 1, 2, 3.
Watch This Space…
This is the current state of play. Other than within the phishing and scamming sphere, the powerful Fraud-as-a-Service model has, funnily enough, acted as a kind of delaying factor in the impact of generative AI on the fraud world more broadly.
If low-code/no-code platforms become ubiquitous within the GenAI coding scene, all this could change within days. At least for now, though, it’s nice to know that there’s an unexpected advantage on the fraud-fighting side in this area. Being an analyst, I find it satisfying to understand some of the reasons for this.
As a fraud analyst, though, I admit I have a small paranoid warning voice in the back of my mind. There’s no way to tell where all this will be in another six months, much less a year. I’ll be watching this space very closely.
Doriel Abrahams is the Principal Technologist at Forter, where he monitors emerging trends in the fight against fraudsters, including new fraud rings, attacker MOs, rising technologies, etc. His mission is to provide digital commerce leaders with the latest risk intel so they can adapt and get ahead of what’s to come.