Artificial intelligence (AI) has been a crucial tool for financial institutions (FIs) in the fight against fraud for the last thirty years. Once the purview of card networks and payment processors decisioning transactions, AI now delivers value in detecting malicious activity of all types across a multitude of organizations and institutions. Yet as a recent study by the U.S. Treasury highlighted, not all organizations are equal in their ability to deploy and leverage AI.

This gap in capabilities makes it more difficult for smaller FIs to cost-effectively detect and prevent fraud as criminals move downstream to softer targets. Consequently, these FIs are forced to turn to vendors that hold significant volumes of data from across their client base, even if that data isn't an exact fit for their institution.

With the advent of new AI tools that bad actors are just starting to leverage, this disparity in capability will expose smaller FIs to an even more lopsided degree of risk, which is driving governments and vendors alike to push for better use of AI by FIs of all sizes. Worst of all, the consequences of this threat extend not only to fraud but also to financial crime, creating an outsized risk of fraud losses and regulatory fines for the institutions that can least afford them.

The Haves and the Have Nots

According to a recent study by BioCatch, 73% of FIs globally use AI for fraud detection. But smaller FIs face a chicken-and-egg problem: AI is only as effective as the data used to train it, and smaller institutions simply have less data to work with. And with less data than their peers, they also have less incentive to prioritize investment in developing AI internally.

This in turn drives smaller FIs to rely far more on third-party providers to apply AI to detect fraud and financial crime, both of which are on the rise. In some ways this gap, let's call it the AI Data Gap, is similar to the wealth gap: lower-income consumers are forced to turn to more expensive credit options, while affluent consumers have access to better terms by virtue of their wealth. This dynamic shows no sign of changing, as about half of FIs expect fraud and financial crime to increase relative to 2023, inevitably pushing many smaller FIs to direct a growing share of their budgets to third-party AI companies.

Adversarial AI Makes Bankers Sweat

One of the largest benefits of AI for an FI is the ability to detect activity that would otherwise be missed by a human being. That is precisely what makes criminals' demonstrated interest in new AI tools, such as generative AI, so disconcerting. These tools have shown immense potential to improve both the quality and quantity of malicious activity, a fact not lost on bankers.

Fraud and financial crime professionals recognize that AI will not only fuel activities that increase the rate of fraud but also intensify the more difficult challenge of scams:

  • 45% expect scam tactics to become more automated
  • 42% expect AI to be used to locate more customer PII
  • 36% expect that scam messages will become more convincing

This doesn't include other threats, such as the use of deepfake tools to create images, voices, or videos to bypass identity and authentication controls. Artificial intelligence is a force multiplier for criminals across the board, meaning the volume of every type of attack will increase.

In the face of growing AI adoption by criminals, smaller FIs suffer an ironic indignity: they are far less likely to have enough data to justify significant investment in internal AI resources. Without the ability to bolster internally developed AI, smaller institutions will feel the adverse effects of AI-enhanced fraud, scams, and financial crime more acutely than their larger peers, who collect far more data, far faster, enabling them to detect and mitigate threats more quickly.

What Has to Happen

To be clear, this isn't an argument for reducing reliance on AI to detect malicious activity, but rather for supplementing it with tools that are agnostic to the use cases to which they are applied and more effective at addressing the threats created by adversarial AI. That can only happen by taking a closer look at fraud- and financial crime-fighting budgets and making decisions that take the long view, with the anticipated effects of adversarial AI in mind.

Consider that the investments smaller FIs may be weighing in newer identity verification and authentication controls could become obsolete sooner rather than later. Instead, bankers should turn to solutions such as behavioral biometric intelligence, which can be applied to both fraud and scam detection. Further still, despite the advances generative AI will bring to criminal capabilities, none of them gives the criminal an advantage over behavioral biometric intelligence, leaving the bad guys with their new AI toys no better off than they were yesterday.
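To make that claim concrete, here is a minimal sketch of how behavioral signals might feed a risk score. The feature names, profile structure, and thresholds are hypothetical illustrations for this article, not any vendor's actual implementation. The point is that these signals describe how a user physically interacts with a device, something that generative AI tools built to fake content (text, voices, images) do not inherently help a criminal reproduce.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class BehavioralProfile:
    """Per-user baseline built from past sessions (hypothetical features)."""
    typing_intervals_ms: list[float]   # delays between keystrokes
    mouse_speeds_px_s: list[float]     # pointer movement speeds

def session_risk(profile: BehavioralProfile,
                 typing_ms: list[float],
                 mouse_px_s: list[float]) -> float:
    """Score how far a new session deviates from the user's own baseline.

    Returns a rough anomaly score: near 0 means the session behaves like
    the genuine user; larger values mean it behaves less like them.
    """
    def deviation(baseline: list[float], observed: list[float]) -> float:
        mu, sigma = mean(baseline), stdev(baseline)
        return abs(mean(observed) - mu) / sigma if sigma else 0.0

    # Average the per-signal deviations. A production system would weight
    # many more signals and calibrate scores against fraud outcomes.
    return (deviation(profile.typing_intervals_ms, typing_ms)
            + deviation(profile.mouse_speeds_px_s, mouse_px_s)) / 2

# Example: a scripted session with machine-regular timing scores high,
# no matter how convincing the scam message or deepfake voice was.
profile = BehavioralProfile(
    typing_intervals_ms=[180, 220, 160, 240, 200, 210],
    mouse_speeds_px_s=[300, 420, 380, 350, 410, 330],
)
print(session_risk(profile, typing_ms=[50, 50, 50], mouse_px_s=[900, 905, 910]))
```

Whether the attacker deploys an AI-written scam message or a cloned voice, the session itself must still be driven by an unfamiliar human or an automated script, and that interaction layer is what behavioral signals measure.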

Quality Over Quantity

The AI Data Gap is real, and its consequences will grow more dire as the criminal application of AI technology spreads. Smaller FIs can either keep pouring money into third-party AI solutions and watch their other investments be rendered obsolete, or they can adapt. Applying behavioral biometric intelligence helps level the playing field, making smaller FIs harder targets. And in the difference between the AI haves and have-nots, it is results that really matter, not the hype.
