Over recent years, articles have appeared everywhere I turn about how generative AI like ChatGPT would revolutionize fraud and how, essentially, the fraudpocalypse was upon us. By this stage, I think it’s clear that hasn’t been the case.
I’ve already explained why generative AI isn’t magic, and in this article, I want to dig a little deeper into its limitations so far and the areas in which it does represent a significant upgrade for fraudsters. Understanding why and how fraudsters use LLMs today is vital knowledge for fraud fighters who want to be prepared for what the future brings.
Making Fraud LLMs is Harder Than You Think
Using an LLM is so easy that it’s not always intuitive to remember that the infrastructure behind the interface is anything but simple. If you wanted a fully dedicated fraud LLM, though, you’d have to build one yourself, because legitimate models are never tailored for fraud or criminal use cases and have safeguards built in to prevent that kind of abuse.
As I’ve written before and will elaborate on, there are many ways around these protections. But if you’re considering an LLM designed to help fraudsters, that’s a whole different thing. The kinds of data related to criminal activities that you might get from using darknet data as a part of the training set are well out of bounds for legitimate companies.
Building one that incorporates such data would require massive heavy lifting for a criminal group. Even the NSA couldn’t have developed its own LLM, according to Gilbert Herrera, director of research at the US National Security Agency: “It really has to be people that have enough money for capital investment that is tens of billions and [who] have access to the kind of data that can produce these emergent properties.”
The fraud ecosystem is impressive, but it doesn’t have that kind of scope available. The investment of time, work and money would be prodigious, particularly once the ongoing costs of running and maintaining the model are factored in.
Adapting, Not Evolving
The difficulty and expense required to create a fraud-focused LLM from scratch are an essential part of why the predictions of doom from a year or so ago haven’t come true. On the other hand, generative AI has undoubtedly come into the world of fraud and is not something any fraud fighter can afford to ignore.
There’s an abundance of what I would call overlays placed on top of existing legitimate LLMs. An overlay adds a layer on top of something like ChatGPT to short-circuit or avoid its anti-crime protections, or to speed up specific processes. The process I went through to persuade an LLM to give me fake data is something I did manually, but it can be baked into an overlay to make it faster and more streamlined. Overlays can also offer additional anonymity for the fraudulent user.
Some of these overlays have been marketed as separate creations. Still, every time I’ve investigated one, it has turned out to be simply a veil placed over an existing, legitimate generative AI model. That makes it sound innocuous, but I’ve been intrigued by what you can do with that kind of simple cheat.
Deep Dive Into Video Deepfake
Look at these videos, for example. Yes, they’re all of me. Hi! 👋 You may remember me from hits like Finding a Fraud Fighter Gig, 5 Scams in 5 Minutes, Live from MRC with Alexander Hall, and so on. Thank you, thank you, I’ll be here all week.
The point, though, is that making those What the Fraud? videos took time, thought and fine-tuning. They’re fun, and I love doing it, but they’re work. Making the videos I’m about to share, not so much.
- Here’s me asking for money — this is an actual video I made myself. It’s the base video, if you like, similar to videos used as the core of many scam attempts.
- Here’s me again, this time asking for money in French. No, I don’t speak French. But with an LLM overlay, there’s nothing easier than to turn my original video into a different language.
- Here’s me once more, and this video is fake. A deepfake, if you like. It’s based on the original video, and I can’t tell you how easy it was to make.
Let’s Look at the Use Cases
Once you see how easy it is to generate the kinds of videos I’ve just shared, you can appreciate how valuable this technology is for all sorts of scams. Bear in mind also how many videos people publish publicly of themselves on social media and the like. Getting the material for a convincing deepfake (or, at a minimum, a clone of someone’s voice) is often trivial.
Using an LLM to automate much of an online chat is even simpler. When you consider romance scams, scams pretending that a friend or loved one needs money straight away, ticketing scams for popular sporting events or concerts, phishing scams, harvesting personal data, and so on, you can see straight away how much faster and easier it all becomes for a fraudster.
As I’ve already demonstrated, fake data creation can follow exactly the same process. Overlays simplify it, streamlining the steps I had to take to convince the LLM to help me without it realizing that I was preparing the groundwork for fraud.
Moreover, as we’ve discussed in the past, it can speed up malware creation, including for people without real technical expertise, and also help spread malware more widely and easily.
To quote myself: while AI isn’t a good fraud instigator, it is a great fraud accelerator.
Don’t Underestimate the Overlays
Ultimately, this is about ROI. Fraudsters want to steal as much money as possible with as little effort as possible. Is it worth creating a new LLM dedicated to serving a fraud use case? No, not when the existing LLMs do a great job with some tweaking.
This explains why the original doom-laden predictions haven’t come to pass. Fraudsters are not incentivized to invest in creating something new and game-changing because the investment involved would be significant and might not work the way they’d like. On the other hand, minimal-effort piggybacking off existing models gives them a real boost in streamlining and scaling their criminal tricks.
On the one hand, it’s a relief that fraudsters are happy to keep leeching off the efforts of others to get a leg up in the ongoing arms race between them and fraud fighters. There’s no revolution yet. On the other hand, it’s sobering to realize that the reason for that is that, for now, they already have everything they want.
Doriel Abrahams is the Principal Technologist at Forter, where he monitors emerging trends in the fight against fraudsters, including new fraud rings, attacker MOs, rising technologies, etc. His mission is to provide digital commerce leaders with the latest risk intel so they can adapt and get ahead of what’s to come.