It’s 6 a.m. You’re in an Uber on your way to the airport. You need to complete an urgent money transfer. You open your banking app, hoping to get it done before the car reaches the terminal and you have to start sprinting toward security.

Facial recognition fails.

You try a second time. Still no luck.

A third attempt — and then the dreaded message: Too many attempts. Try again in one minute.

The car pulls up. Boarding starts soon. Your flight won’t wait. Now what?

If you work in fraud prevention or customer experience at a bank, you’ve likely heard some version of this story countless times. It highlights a dilemma few want to admit: We’re caught between technical efficacy and human reality.

This type of scenario doesn’t just capture a moment of customer frustration. It’s a symptom of a deeper tension in modern banking. As institutions race to build smarter, more secure systems, they often underestimate the real-world friction those safeguards create. In this piece, we examine how facial recognition is testing the balance between fraud prevention and user experience, and explore alternative approaches that protect customers without getting in their way.

Facial biometrics: Fraud hero, user experience villain 

Facial biometrics is undeniably a powerful ally against account takeover (ATO) fraud. Some studies show it can reduce ATO by as much as 85%. The technology is largely effective at blocking bots and automated attacks, and it creates a real barrier against criminals using stolen credentials.

So, what’s the problem?

The problem is us. Facial biometrics was designed for a perfect world where we always have good lighting and are never in a hurry. Spoiler alert: That world doesn’t exist.

In reality, user experience studies consistently show high levels of frustration with facial recognition failures. Multiple layers of authentication often lead to increased transaction abandonment, while customer service teams frequently field complaints tied to facial recognition issues. Everyday variables, like masks, glasses, or poor lighting, drive up error rates and compound user dissatisfaction.

Fraud finds a way 

Another problem? Fraudsters don’t retire when facial biometrics gets implemented. They simply change their strategies.

Between 2023 and 2024, Brazilian banks implementing facial recognition recorded an 85% reduction in ATO fraud. But over the same period, FEBRABAN reported alarming growth in social engineering scams.

Attackers once relied primarily on automated scripts, but as facial biometric security has improved, they’ve evolved their tactics. Now, criminals increasingly turn to social engineering, impersonating bank representatives, cloning WhatsApp accounts, and leveraging emotional scams in a variety of creative forms.

Criminals are nothing if not endlessly innovative. Like water, fraud always finds the easiest path.

Evolving the modern fraud toolkit 

Fortunately, there’s good news. Organizations can retain the benefits of facial biometrics in preventing account takeover, apply them more strategically, and detect evolving fraud patterns — all without disrupting the user experience.

The next wave of modern defense is already here: behavioral intelligence, telemetry, geolocation, and device fingerprinting.

Behavioral intelligence analyzes how a user interacts with their device, from typing pace and swipe patterns to response delays and mouse movements, along with signs of device compromise such as malware or remote access. It can detect telltale signs of social engineering, such as the “guided hand” behavior often seen in scam scenarios.

BioCatch's behavioral intelligence solutions, for example, analyze more than 3,000 behavioral and device-related data points in real time to build a nuanced, continuously evolving risk profile for each user.
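To make the idea concrete, here is a minimal, hypothetical sketch of how a few behavioral signals might be compared against a user’s own baseline to produce a session risk indicator. The feature names, thresholds, and weights are illustrative assumptions for this piece, not BioCatch’s actual model.

```python
from dataclasses import dataclass


@dataclass
class SessionBehavior:
    """Illustrative behavioral signals captured during a banking session."""
    avg_keystroke_interval_ms: float   # typing pace
    swipe_speed_px_per_s: float        # swipe patterns
    response_delay_s: float            # pause before acting on a prompt
    remote_access_detected: bool       # e.g., screen-sharing or RAT tooling


def behavioral_risk(session: SessionBehavior, baseline: SessionBehavior) -> float:
    """Return a 0-1 risk indicator by comparing a session to the user's baseline.

    Hypothetical scoring: large deviations from the user's normal rhythm,
    long hesitations, and remote-access signals all push the score up.
    """
    score = 0.0
    # Hesitant, "guided" typing: much slower than the user's usual cadence
    if session.avg_keystroke_interval_ms > 2 * baseline.avg_keystroke_interval_ms:
        score += 0.3
    # Unusually slow swiping compared to the user's baseline
    if session.swipe_speed_px_per_s < 0.5 * baseline.swipe_speed_px_per_s:
        score += 0.2
    # Long pauses before confirming, typical when someone is being coached
    if session.response_delay_s > baseline.response_delay_s + 5:
        score += 0.2
    # Active remote-access tooling is a strong compromise signal
    if session.remote_access_detected:
        score += 0.3
    return min(score, 1.0)
```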

Telemetry, geolocation, and device fingerprinting provide another layer of intelligence. Every device has a unique digital DNA based on its configuration, movement patterns, and contextual signals. These tools can detect anomalies like VPN or proxy use, suspicious apps, or even whether a device is on an active call — all common signals in remote scam scenarios.
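As a rough illustration of that “digital DNA” idea, the sketch below hashes a handful of device attributes into a stable identifier and collects contextual red flags of the kind mentioned above. The attribute names and signal list are assumptions made for the example, not a description of any vendor’s implementation.

```python
import hashlib


def device_fingerprint(attributes: dict[str, str]) -> str:
    """Derive a stable identifier from device configuration attributes.

    Illustrative only: real fingerprinting uses far more signals and
    fuzzier matching to tolerate minor configuration changes.
    """
    canonical = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()


def contextual_risk_signals(telemetry: dict) -> list[str]:
    """Collect contextual red flags commonly seen in remote-scam scenarios."""
    signals = []
    if telemetry.get("vpn_or_proxy"):
        signals.append("VPN or proxy in use")
    if telemetry.get("on_active_call"):
        signals.append("device is on an active call")
    if telemetry.get("suspicious_apps"):
        signals.append("suspicious apps installed")
    if telemetry.get("geo_mismatch_km", 0) > 500:
        signals.append("location far from the user's usual pattern")
    return signals


# Example usage
fp = device_fingerprint({"model": "iPhone 14", "os": "iOS 17.5", "locale": "pt-BR"})
flags = contextual_risk_signals({"vpn_or_proxy": True, "on_active_call": True})
```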

Preventing fraud, and false positives, in real time 

With these solutions in place, banks can distinguish between legitimate and fraudulent activity in real time.

For example, if a customer gets a call from someone posing as a bank representative and is pressured to make an urgent transfer, behavioral intelligence can quietly run in the background and flag signs of risk, like hesitant typing, a payment to a new beneficiary, or activity at an unusual time.

With BioCatch, our technology instantly assigns a risk score to the transaction — in this case, 920 out of 1,000, indicating a high likelihood of fraud. The bank would automatically block the transfer, and the customer would receive a real-time notification warning them of potential fraud.

By contrast, a legitimate transaction from the customer’s usual iPhone, connected to their home Wi-Fi, might score just 50 out of 1,000, signaling low risk and allowing the transaction to proceed without added friction.
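In pseudocode terms, the decision logic described in these two examples might look like the short sketch below. The 0-to-1,000 scale comes from the scenarios above; the thresholds and actions are illustrative assumptions, since real deployments tune them per bank to balance fraud losses against false positives.

```python
def decide(risk_score: int) -> str:
    """Map a 0-1,000 transaction risk score to an action (illustrative thresholds)."""
    if risk_score >= 800:          # e.g., the 920 scam scenario above
        return "block transfer and notify customer of potential fraud"
    if risk_score >= 400:
        return "step up authentication or hold for review"
    return "allow without added friction"  # e.g., the 50 home-Wi-Fi case


print(decide(920))  # block transfer and notify customer of potential fraud
print(decide(50))   # allow without added friction
```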

Success cases: Turning theory into practice 

Let’s look at how this is currently playing out in the real world.

One UK bank using BioCatch solutions maintained 95% effectiveness against ATO, significantly reduced successful social engineering scams, decreased facial recognition failures for legitimate customers, and achieved a 400% return on investment in the first year of deployment.

Meanwhile, in Brazil, another bank deploying BioCatch solutions saw similar success. After initially implementing facial recognition, the bank reduced ATO by 89% but soon after faced a surge in WhatsApp and voice scams. Once BioCatch was fully deployed, the bank maintained 97% effectiveness in identifying and stopping ATO fraud within the first eight months, significantly reduced successful social engineering scams, and detected most cases where customers were acting under pressure from fraudsters. It also increased its Net Promoter Score (NPS) by 38 points and achieved substantial ROI through fraud loss reduction and improved customer experience.

These real-life case studies don’t even factor in additional operational cost savings.

The future is invisible, and it's already here 

The beauty of the behavioral intelligence approach is that the customer sees nothing. For them, the bank simply “works better” and protects them from scams they didn’t even know existed. Meanwhile, behind the scenes, a digital defense system works around the clock to ensure only legitimate users can access their accounts.

It’s an invisible layer of protection, one that can complement or even replace visible tools like facial recognition. Because while facial recognition remains a powerful tool today, fraudsters are guaranteed to keep adapting and innovating. The future lies in building an ecosystem that not only stops attacks but also prevents fraudsters from simply shifting their tactics elsewhere.

So, next time facial recognition fails you in a dark Uber at 6 a.m., you can at least take comfort in knowing there’s a better way.
