This is the third piece in an ongoing conversation between BioCatch Global Advisor Seth Ruden and BioCatch Threat Analyst Justin Hochmuth about how various fraud trends impact smaller financial institutions. You can find Part 1 on impersonation scams here and Part 2 on money mules here.

Seth: At the moment, I’m getting the sense there are increasing anxiety levels at many Main Street financial institutions (FIs). In my recent contact with institutions and their staffers, there seems to be a general feeling of unease surrounding the current AI-driven business cycle and the emerging threats we’ve seen. Bankers are increasingly concerned about the recent, novel, and diverse social engineering deepfake scams seen in Hong Kong, Texas (family scams), and on social media.

Justin, are you seeing anxiety and alarm in these areas as well? Do you also feel there’s a wave building, a sense that, without novel controls, these new threats we’re witnessing will prove a massive sophistication challenge smaller FIs will struggle to solve?

Justin: The anxiety is really being driven by the understanding and constant reminder that fraud and account takeover are not happening in isolation. Our online presence – whether on social media, online banking, e-commerce, or other online portals – is incredibly interconnected. This interconnectivity brings accessibility and convenience but also compromises our digital security. Certainly, the methods of attack are increasingly diverse and technical, and fraudsters have a strong grasp of how to use AI tools to socially engineer their victims, but most of the conversations I have with FIs come back to setting up proper controls. Main Street financial institutions understand that, in many cases, they cannot stop their customers from giving out credentials or a one-time password (OTP) to a bad actor, and they can’t stop a breach from happening outside of their portal. They can, however, install controls that protect their members. As much as there is anxiety about the sophistication of new attacks, there is also the resolve to use the right tools to meet these challenges.

Seth: Right. So currently the behavioral biometric defenses we implement for our clients are holding up well in this changing environment, but I wonder if we’re starting to see some real shifts in attack vectors. Is the anxiety around AI and deepfakes starting to materialize across the attack surface, or is this just the usual FUD (fear, uncertainty, and doubt)? Are there significant emerging threats you’ve noted in your day-to-day observations demonstrating that banks are in fact facing this new threat landscape yet? The reports we see in the news are sparse and iterate on the threats we saw in the last business cycle, so it’s evolutionary rather than revolutionary, in my estimation. What’s your gut say – are we seeing significant and novel threats to Main Street banks yet?

Justin: I would say attack vectors have broadened more than shifted. As much as we’ve seen a democratization in anti-fraud tools and anti-fraud technology, we’ve also witnessed an equal democratization in abusive technologies, malwares, tactics, and strategies for fraudsters. Fraudsters have become very adept at utilizing all the value contained within an online banking profile, not just the movement of money via transfers. This is truly the value of behavioral biometric controls. BioCatch machine learning (ML) and AI models can detect small behavioral differences between the fraudster’s behavior and the genuine user’s behavior, and that detection carries forward throughout the online banking journey, regardless of how access was attained.

Relative to FUD, there is valid concern over the way AI tools and deepfakes can make it easier for fraudsters to spear-phish their way to access. However, I agree that, while AI technologies should be seen as revolutionary, the fraud attacks we’re seeing day-to-day are evolutionary. I believe Main Street banks should feel confident in the knowledge that there are solutions and controls built specifically to face these complex problems in a changing fraud landscape.

Seth: So presently, we’re not yet in the storm, as it’s been presented to us. Perhaps the right thing to do in this situation is to prepare: set the roadmap and communicate the right posture ahead of the coming, inevitable events. So, let’s do now what needs to be done before the storm arrives.

Tell me your thoughts on the best balance of education, awareness, and controls. Assume it’s your team, your mission, and our technology. You have to get the word out to customers/members and, of course, internally. What’s your game plan?

Justin: Preparation is the name of the game, and there are things a Main Street bank can do today to assist in facing these dynamic challenges.

The first task a Main Street bank needs to accomplish is to define what success looks like in its overall fraud strategy. You must create KPIs and measurables relative to fraud capture, alert rates, automation, friction, and member experience. A good fraud strategy is as successful at creating a positive member experience as it is at capturing fraud, which leads us to our second step: documentation.

In order to build strategies that limit false positives and retain a smooth member experience, you have to know what is and what isn’t fraud. It’s essential to track fraud cases across all channels, no matter where that feedback is coming from. If a member calls in to report fraud, that gets tracked. If a tool you are using indicates fraud and you confirm it, that also gets tracked. Everyone in the organization, from front-line employees on up, must be aligned on this documentation process.

Last but definitely not least, a Main Street bank needs to leverage its data to make informed, educated strategy decisions that align with the KPIs defined in Step 1. At BioCatch, we leverage complex ML and AI models and score outputs and risk factors to identify the riskiest and safest behavior in online banking sessions. Combined with the data collected, these scores and attributes can be leveraged to create strategies that both target fraud and reduce friction/MFA challenges across all activities. Utilizing these types of outputs in your fraud strategies allows you to automate decisions and become proactive rather than reactive. It also frees up your people to focus on investigating new trends or problems that require more human intervention.

So, to recap: 1.) define success, 2.) track and organize the data, and then 3.) leverage that data to automate decisions.
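The three steps above can be sketched as a minimal decision layer. Everything here is hypothetical – the score range, thresholds, action labels, and field names are illustrative placeholders, not BioCatch’s actual API or recommended values:

```python
# Hypothetical sketch: automating decisions from a behavioral risk score.
# Assumes a session risk score on a 0-1000 scale (higher = riskier);
# thresholds are placeholders an FI would tune against its own KPIs.

def decide(session_score: int, high: int = 900, low: int = 300) -> str:
    """Step 3: map a session risk score to an automated action."""
    if session_score >= high:
        return "block"      # likely fraud: stop the session, open a case
    if session_score <= low:
        return "allow"      # low risk: skip step-up friction (no extra MFA)
    return "step_up"        # uncertain: challenge with MFA, queue for review


def track_outcome(case_log: list, session_id: str,
                  decision: str, confirmed_fraud: bool) -> None:
    """Step 2: document every outcome, whatever channel reported it,
    so strategies can later be tuned against the KPIs from Step 1."""
    case_log.append({
        "session": session_id,
        "decision": decision,
        "fraud": confirmed_fraud,
    })
```

The point of the sketch is the feedback loop: automated decisions come from the scores, every confirmed outcome goes back into the log, and the thresholds get re-tuned against the KPIs defined up front.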

Seth: Alright, let’s end with a provocative one: Do you think the AI boom is real? To refine this a bit, is the promise of AI and its accompanying elements going to be a real game-changer for fraudsters? Are they going to realize a new advantage well above and beyond where they are today? Is there a tipping point where AI will be vastly more problematic? And if you think there is something to all this FUD, what are the go-to controls FIs should seek in this space?

Justin: I think Bill Gates has a pretty good Netflix series out right now discussing some of the challenges relative to AI, GPT, etc., and there are a lot of people far smarter than me who are still grappling with this technology’s (many!) implications. AI is already a game-changer, in the world of fraud and beyond.

That said, there are certain things that machines have a really hard time replicating. Even the most advanced bots have very distinct behaviors that can be differentiated from the genuine user’s human behavior. Think of it this way: We have self-driving cars that are excellent, and they drive safely. But will the car drive itself the same way as its human owner does? BioCatch does a similar thing in the sense that we utilize AI and ML models to learn exactly how you drive your online banking experience. When we detect an anomaly in the navigation and operation of your digital banking session, we can flag and stop the bad actor behind it – whether they’re human or machine.
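As a toy illustration of the anomaly idea (not BioCatch’s actual models, which are far more sophisticated and use many more signals), a simple z-score check can flag a session whose typing cadence deviates sharply from a user’s learned baseline. The feature, values, and threshold below are all hypothetical:

```python
import statistics

def anomaly_score(baseline: list, observed: float) -> float:
    """How many standard deviations the observed inter-keystroke
    interval (ms) sits from the user's learned baseline."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(observed - mean) / stdev if stdev else 0.0

def is_anomalous(baseline: list, observed: float,
                 threshold: float = 3.0) -> bool:
    """Flag sessions more than `threshold` deviations from baseline."""
    return anomaly_score(baseline, observed) > threshold

# A user's learned typing cadence (ms between keystrokes) vs. a
# scripted bot injecting input at machine speed.
user_baseline = [120.0, 135.0, 128.0, 142.0, 130.0]
```

Real behavioral models consider hundreds of such signals at once – navigation patterns, mouse dynamics, device orientation – which is why an anomaly carries forward through the whole session, whether the actor behind it is human or machine.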

From a control perspective, I would always prefer to focus my energies on the things I can control. AI makes it easier for fraudsters to socially engineer OTPs and defeat step-up authentication methods. Malware, keyloggers, and remote access tools will continue to proliferate, and fraudsters will continue to find new ways to attack. Certainly, member education is a crucial step, but we know it won’t always be 100% effective. Behavioral tools give FIs an advantage because they can stop the activity even if the fraudster eludes all other controls and gains access to a user’s account. At the same time, model outputs and scores allow you not to punish genuine activity, resulting in a much better user experience. It is important to be aware of the complexities and challenges relating to AI, but we should feel confident the tools exist for the good guys to counter current and emerging AI-powered threats.