Criminals aren’t slouches when it comes to technology – quite the contrary. Just because they chose a life of crime doesn’t make them inherently lazy. In fact, they’ve demonstrated a level of creativity and problem solving that has made digitally enabled crimes, such as scams, harder and harder to stop. And with the advent of consumer-ready artificial intelligence applications that can create near-perfect imitations of trusted people and organizations, scams are about to evolve to a point where unaided detection will be impossible.

Scammers aim to deceive people and obtain their money, personal information, or other digital assets, often by imitating a trusted party. Scams take many forms: bank impersonation scams, investment scams, job scams, romance scams, tech support scams, and so on. Until recently, eagle-eyed consumers could pick up clues that they were being confronted with a scam of one type or another. But that world is now behind us.

As technology has advanced, scammers have gained new ways to manipulate victims – specifically, deepfakes and large language models, both of which can create realistic but fake content that is hard to distinguish from legitimate activity.

  • Deepfakes use artificial intelligence (AI) to manipulate images, videos, or audio of real people. They can make people appear to say or do things that they never did or would do. For example, deepfakes can be used to create fake news, fake interviews, fake endorsements, or fake evidence. Plenty of convincing examples involving celebrities like Tom Cruise and Scarlett Johansson, and politicians like Nancy Pelosi and Barack Obama, can now be found online.
  • Large language models (LLMs) are AI systems that can generate natural language text based on a given input or prompt. They can produce coherent and fluent text that can mimic the style, tone, or content of a specific domain, person, or genre. For example, LLMs can be used to create chatbots, fake reviews, fake posts, fake profiles, or fake emails. They can also be used to generate images and even video.

Scammers are already using these tools to great effect, as evidenced by their success with deepfake video and audio in romance and so-called ‘grandparent’ scams. And gone are the days when awkward spelling and grammar were telltales of a potential scam, as LLMs now help scammers craft error-free emails and text messages. But it’s the combination of deepfakes and LLMs that will become a gamechanger, generating convincing, personalized scams – targeted at specific individuals or groups – that are virtually indistinguishable from legitimate communications.

How it will work

The day will soon come when scammers can use LLMs to conduct research and generate everything needed for an effective scam – whether that be text, websites, audio, or even video – and I’ve included example prompts to give you a sense of how low the bar to entry is being set for scammers. Commercially available LLMs may not fully process prompts like these due to privacy and security controls, unless they are jailbroken, but a tool like FraudGPT or WormGPT could – and as the ability of LLMs to generate convincing deepfakes matures, this all-in-one approach is inevitable:

  • Identify targets using defined criteria (LLM prompt: list 100 senior executives in the cybersecurity space, including their name, location, email address, employer, job title, and social media profiles)
  • Identify third parties with whom potential victims have personal relationships (LLM prompt: list all known family members for cybersecurity executive John Doe and specify the type of relationship for each)
  • Research the relationship between the victim and a trusted third party (LLM prompt: identify themes from all available communications and social media posts involving John Doe and Jane Doe)
  • Solicit funds from the victim (LLM prompt: create a 30-second video of Jane Doe inside an electronics store asking John Doe to purchase a laptop for their son from Fake Laptop Shop)
  • Build a website to collect payment credentials (LLM prompt: create an e-commerce website for laptop computers called Fake Laptop Shop using HTML that includes an offer for three different gaming laptops on the home page with pictures of each, and which can accept credit or debit card payments)

The time to act is now

There is an urgent need for innovative solutions that can help consumers and enterprises combat the unified threat of deepfakes and LLMs. By combining their efforts, both sets of stakeholders can maximize the opportunity to identify future scam threats. Fortunately, financial institutions and other organizations can implement (or otherwise support the availability of) technology that is already on the market:

  • Consumer-facing indicators: These are cues that can help consumers discern whether something is a scam – watermarks, timestamps, labels, warnings, or ratings that signal the source, authenticity, or reliability of the content. Consumers can also use tools or apps that help them verify or analyze content, such as reverse image search, fact-checking websites, or deepfake detection software.
  • Behavioral biometrics: By analyzing patterns of human behavior, such as mouse movements, typing speed, or pressure on touchscreens, behavioral biometrics can be used to detect and prevent activity indicative of a manipulated accountholder or customer, as well as to detect mule activity – which is a crucial component for monetizing many scams – even if the mule is a willing participant.
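To make the behavioral biometrics bullet concrete, here is a minimal sketch of one way scripted or automated input could be flagged from typing rhythm alone. The function names, the sample timings, and the z-score threshold are all illustrative assumptions for this post, not any vendor’s actual product; real systems combine many more signals (mouse movement, touchscreen pressure, navigation patterns) and far more sophisticated models.

```python
import statistics

def keystroke_profile(sessions):
    """Build a baseline from a user's past typing sessions, where each
    session is a list of inter-keystroke intervals in seconds. Returns
    the mean and standard deviation of the per-session averages."""
    means = [statistics.mean(s) for s in sessions]
    return statistics.mean(means), statistics.stdev(means)

def looks_anomalous(profile, intervals, z_threshold=3.0):
    """Flag a new session whose average inter-key interval deviates from
    the baseline by more than z_threshold standard deviations - scripted
    input tends to be far faster and more uniform than human typing."""
    baseline_mean, baseline_stdev = profile
    session_mean = statistics.mean(intervals)
    z = abs(session_mean - baseline_mean) / baseline_stdev
    return z > z_threshold

# Hypothetical baseline: a human's typical keystroke gaps across sessions.
history = [
    [0.21, 0.18, 0.25, 0.19, 0.30],
    [0.22, 0.20, 0.24, 0.17, 0.28],
    [0.19, 0.23, 0.26, 0.21, 0.27],
]
profile = keystroke_profile(history)

# A bot auto-typing credentials produces uniform, near-zero gaps.
print(looks_anomalous(profile, [0.01, 0.01, 0.01, 0.01]))  # True
print(looks_anomalous(profile, [0.20, 0.24, 0.18, 0.26]))  # False
```

The appeal of this class of signal is that it is orthogonal to content: a deepfake video or LLM-written email can be flawless, yet the behavior of the session that moves the money can still betray manipulation or mule activity.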

Scams are already a costly problem for consumers and financial institutions, and they are set to become even more sophisticated and challenging as criminals use the combination of deepfakes and LLMs to create undetectable scams. All stakeholders need to be more aware and vigilant, and collectively adopt a combination of innovative solutions that can help them detect and prevent such scams.

This application of pioneering technology is inevitable, but malicious outcomes aren’t. Criminals have tipped their hand by demonstrating early interest in the application of these technologies, and we have the tools we need to get ahead of them before consumers are overwhelmed with scams that they couldn’t possibly detect on their own.

It’s a rare opportunity to preempt the threat.

Let’s not waste it.
