Our 2026 Digital Banking Fraud Trends in the UK report features a case study on recognizing deepfakes during account opening. Some of those deepfakes originated from a tool called Verifus. This post examines how criminals use Verifus to open new accounts under false identities.

A new customer opens a bank account from their phone. They follow the online prompts and record a selfie video in which they blink as directed and hold a valid ID up to the camera. Everything looks normal. The bank confirms a successful identity check.

But the person onscreen isn’t real — or at least not who they claim to be. The video has been pre-recorded, then fed into the app as if it were happening live.

Digital identity verification is now central to how banks and fintechs onboard customers. Most systems rely on selfie videos and liveness checks to confirm a real person is present. The assumption is simple: If the camera is live, the user is physically there. But thanks to new AI tools, that assumption is starting to fall apart.

Attackers can now manipulate camera input, replacing live feeds with pre-recorded or fabricated video. What appears to be a legitimate user may actually be a forged video stream.

One tool gaining attention in these attacks is Verifus. Widely shared across Telegram channels and darknet forums, it overrides camera inputs on Android devices, replacing a live selfie with pre-recorded or streamed footage.
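
To make that concrete, the sketch below (ours, not Verifus code) shows the kind of device-side heuristic an onboarding app might run with Android's Camera2 API: checking whether the camera used for the selfie at least reports itself as built-in hardware rather than an external or virtual source. Because injection tools hook the camera stack itself, they can spoof these values too, so this is at best one weak signal among many.

    import android.content.Context
    import android.hardware.camera2.CameraCharacteristics
    import android.hardware.camera2.CameraManager
    import android.hardware.camera2.CameraMetadata

    // Illustrative heuristic only: list camera IDs that report themselves as
    // external sources (e.g. USB or virtual cameras) rather than built-in
    // front/back hardware. Tools that hook the camera stack can spoof this,
    // so treat it as one weak signal, never as proof the feed is genuine.
    fun externalCameraIds(context: Context): List<String> {
        val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
        return manager.cameraIdList.filter { id ->
            val characteristics = manager.getCameraCharacteristics(id)
            characteristics.get(CameraCharacteristics.LENS_FACING) ==
                CameraMetadata.LENS_FACING_EXTERNAL
        }
    }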

Tools like Verifus signal a shift in how identity fraud is carried out. Instead of traditional account takeover attacks, criminals are simulating legitimate users in real time. This shift is forcing banks and fintechs to rethink how identity is verified and which signals can still be trusted.

 

The dangers of Verifus

 

Verifus is rarely used on its own, and the wider setup makes it even more dangerous. It’s typically combined with tools like Open Broadcaster Software (OBS), virtual cameras, and local streaming servers. In more advanced cases, attackers add deepfake software to generate realistic facial movements in real time. Verifus acts as the bridge, feeding synthetic content into verification processes and making the sessions look legitimate.

A second layer of these attacks involves tools often referred to as “Verifus Keybox,” which are used to bypass device-level checks and make tampered or unsafe devices appear normal. Combined with the manipulated video, this lets attackers defeat checks on both the user and the device.
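
For context, “keybox” in the Android world usually refers to the device-unique key material behind hardware-backed key attestation, which is how apps and servers normally establish that a device is untampered; leaked or fabricated keyboxes are what let a compromised device pass those checks. As a rough sketch using standard Android KeyStore APIs (the key alias is hypothetical), this is roughly how an app requests an attestation certificate chain for the bank's server to validate against Google's attestation roots and revocation lists:

    import android.security.keystore.KeyGenParameterSpec
    import android.security.keystore.KeyProperties
    import java.security.KeyPairGenerator
    import java.security.KeyStore
    import java.security.cert.Certificate

    // Sketch of the device-integrity check that keybox abuse targets: generate a
    // hardware-attested key bound to a server-supplied challenge, then return the
    // attestation certificate chain for the server to verify. A leaked or forged
    // keybox can let a tampered device produce a chain that still looks valid,
    // which is why server-side root and revocation checks matter.
    fun requestAttestationChain(serverChallenge: ByteArray): Array<Certificate> {
        val alias = "onboarding_attestation_key"  // hypothetical alias
        val spec = KeyGenParameterSpec.Builder(alias, KeyProperties.PURPOSE_SIGN)
            .setDigests(KeyProperties.DIGEST_SHA256)
            .setAttestationChallenge(serverChallenge)  // binds the chain to this session
            .build()
        KeyPairGenerator.getInstance(KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore")
            .apply { initialize(spec) }
            .generateKeyPair()
        val keyStore = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
        return keyStore.getCertificateChain(alias)
    }

The point is not that attestation is broken, but that once the underlying key material is compromised, the device check degrades into another spoofable static signal, which reinforces the case for behavioral signals below.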

Equally concerning is how accessible these tools have become. Verifus is promoted through Telegram bots, private groups, and online forums, often alongside escrow services and support channels. For a relatively small fee, users can join communities that offer not just the tool, but also templates, documents, and step-by-step methods for bypassing Know Your Customer (KYC) checks.

Together, these elements point to a mature fraud-as-a-service ecosystem, where tools, guidance, and support are packaged in a way that lowers the barrier for less experienced actors to carry out sophisticated attacks.

 

Why behavior matters more than digital identity snapshots

 

Notably, these attacks highlight a growing weakness in identity verification: an overreliance on static checks. Most systems still depend on what they can see at a single moment — a selfie, a video, or a device check. But that approach breaks down when those signals can be faked.

Behavior, by contrast, is much more resistant to imitation. The way a person types, navigates, or pauses follows natural patterns that are difficult to mimic convincingly. When a session is driven by injected video, remote control, or scripted actions, timing may feel too precise, interactions mechanical, and overall activity out of step with real users.
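
As a toy illustration of the timing point (not any vendor's production model), the Kotlin sketch below flags an input field whose keystroke timestamps are suspiciously uniform. Real behavioral systems combine many such features across typing, navigation, and touch dynamics rather than relying on a single threshold.

    import kotlin.math.sqrt

    // Toy example: scripted or replayed input tends to show unnaturally uniform
    // timing, while human typing is irregular. Flag a keystroke-timestamp stream
    // (milliseconds) whose intervals have a very low coefficient of variation.
    // The 0.15 default is purely illustrative.
    fun looksScripted(keyDownTimesMs: List<Long>, minVariation: Double = 0.15): Boolean {
        if (keyDownTimesMs.size < 10) return false  // too few events to judge
        val intervals = keyDownTimesMs.zipWithNext { a, b -> (b - a).toDouble() }
        val mean = intervals.average()
        if (mean <= 0.0) return true  // zero or negative gaps never come from a human
        val variance = intervals.map { (it - mean) * (it - mean) }.average()
        return sqrt(variance) / mean < minVariation
    }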

That’s part of why not every attack succeeds. Guides shared in online communities highlight failed attempts due to poor document quality, device anomalies, or network mismatches. In many cases, though, it’s the behavioral inconsistencies that ultimately reveal the fraud.

 

Key takeaways:

 

  • AI tools undermine digital onboarding’s core assumption that a live camera feed means a real customer is physically present.
  • Criminals can now inject pre-recorded, streamed, or synthetic video into selfie and liveness checks, making fraudulent identity verification attempts appear legitimate.
  • Fraud tooling is becoming commercialized through Telegram groups, bots, forums, support channels, and escrow services, lowering the barrier to entry for less experienced fraudsters.
  • Behavioral signals offer a strong countermeasure, as user patterns like typing, navigation, hesitation, and interaction timing are difficult to fake consistently.

 
