It’s been about two years since the launch of ChatGPT, and we’ve since seen a steady stream of associated threat forecasts from all strata of the security community. OpenAI has even offered up a few internal insights into this GenAI threat landscape, created in large part by its do-everything AI chatbot.
So far, so what?
Well, OpenAI's most recent update focuses much of its attention on the financial crimes space (it also offers a deeper look at the planting of articles and tweets in mainstream media – pretty scintillating stuff if you want a deep dive), and serves up a hearty helping of open-source intelligence (OSINT) on how great a threat GenAI really poses.
Predictions become reality
Last year, BioCatch published a whitepaper on the most probable impacts AI will have on fraud and scams. Here we are now, with that paper’s predictions put to the test, and we’re able to validate much of its thinking and prognostication.
Here is a series of observations noted in OpenAI’s latest edition of “Disrupting malicious uses of our models”:
“We identified and banned a set of ChatGPT accounts whose activity appeared connected to an operation originating in Cambodia. These accounts were using our models to translate and generate comments for a romance baiting (or “pig-butchering”) network across social media and communication platforms.”
In our whitepaper, we hypothesized scammers would use this technology to improve the believability of their outreach, thereby also increasing the efficiency of their attacks. Those expectations have been fully realized.
The other findings that stand out pertain to automation. GenAI tools now not only gather intelligence on potential targets but also write, implement, and debug the code bad actors use to create malware. Again, from OpenAI’s February update:
“They were using our models to assist with analyzing documents, generating sales pitches and descriptions of tools for monitoring social media activity powered by non-OpenAI models, editing and debugging code for those tools, and researching political actors and topics.”
“The actors used our models for coding assistance and debugging, along with researching security-related open-source code. This included debugging and development assistance for publicly available tools and code that could be used for Remote Desktop Protocol (RDP) brute force attacks, as well as assistance on the use of open-source Remote Administration Tools (RAT).”
OpenAI also notes how bad actors have used its tools to craft compelling employment scams, recruiting victims whom the scammers then fleece of the funds invested in the fake employment process or, potentially, use as money mules to launder illicit funds.
“This activity appeared to originate in Cambodia. In these scams, victims generally lose both the ‘earnings’ and their own money.”
“The various accounts used our models to generate content seemingly targeting each step of the recruitment process with different deceptive practices, all designed to be mutually supporting.”
At least they’re watching
The upside is that the developers of AI have been active in their stewardship of it (yeah, Roko’s Basilisk and all that noise might have you doubting this, but this latest update provides at least some evidence to the contrary), have developed the necessary monitoring controls, and are sharing these insights with the community in an attempt to keep us informed as abuses of AI evolve. These are still fairly early days, and the attacks described above are more primitive than we might’ve expected (and far more primitive than those we should expect going forward), but the theme remains constant: AI is a game-changer for bad actors, allowing them to take attacks live faster than ever. The threat landscape is becoming more complex and sophisticated, and the ability to discriminate between legitimate and imitated behavior grows more crucial by the day.
A “new normal” realized
OpenAI’s “Disrupting malicious uses of our models” validates the industry’s concerns about an incoming new normal, in which scammers harness GenAI tools to improve and scale their efforts to lure victims into actively, if unwittingly, participating in scams and money laundering. That new normal has arrived, and it’s not going away.
But, dear reader, don’t be discouraged. We’ve been just as busy as the bad guys, developing, refining, and deploying the tools necessary to disrupt these criminal networks and keep your money safe.
Our own models reveal new methods to identify and disrupt fraud, scams, and financial crime, equipping our customers with new capabilities. So, while the world grows more opaque, know that there are solutions that allow the good guys to fight fire with fire.
One last item of note: The author used no AI in the writing of this blog, so rest assured that your trust in the legitimacy of these words is well-founded …
… or is it?