Criminals are rarely followers. Often, they are among the earliest adopters of new technologies. Free from the constraints of legitimate businesses, they operate in a world where no one is afraid to fail. Ethics and the legality of their operations are moot points.

So-called agentic AI – a concept that goes beyond mere assistance with information or text generation – refers to AI systems capable of autonomously planning and executing tasks. These systems actively drive decisions and actions, moving beyond passive support.

Gartner predicts agentic AI will autonomously resolve 80% of customer service issues without human intervention by 2029.

Consider for a moment if the approach adopted by criminals is really all that different to the way your financial institution services its customers. In fact, it often benefits criminals to emulate a bank’s customer experience.

If the adoption curve for legitimate businesses is predicted to result in widespread utilisation by 2029, we should probably assume the criminal fraternity will find ways to use this technology even sooner.

The war is coming.

The financial services sector no longer relies on a brick-and-mortar presence. Regardless of the business driver, the front door to the sector is typically a remote channel.

While remote channels provide threat actors with a sizable attack surface, the arrival of agentic AI is both an opportunity and a threat. As with all conflicts, there is a choice to be made: Essentially, we need to decide if we’d like to fight a symmetrical or asymmetrical conflict.

The front door: Account opening (AO)

Effective onboarding is a key defense in the war against criminal threat actors. Agentic AI will provide criminals with an opportunity to scale up their efforts to infiltrate the financial system.  

Existing Know Your Customer (KYC) processes, document verification, and biometric authentication will remain core to securing the front door, but we should anticipate a significant shift in the scale, effectiveness, and sophistication of attacks.

Opportunities:

• Improved risk assessment: Agentic AI can analyze vast amounts of customer data to identify fraudulent applications. It achieves this by detecting inconsistencies in submitted information, device fingerprints, and behavioral biometrics.

• Adaptive verification measures: AI-powered identity verification adapts dynamically to the applicant’s risk profile, only requiring additional authentication steps when anomalies are detected (a minimal sketch of this risk-based approach follows below).
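
Both points above describe the same underlying pattern: combine onboarding fraud signals into a risk score and add friction only when that score warrants it. The Python sketch below illustrates one way this could be wired together; the signal names, weights, and thresholds are illustrative assumptions, not a description of any vendor’s actual model.

```python
from dataclasses import dataclass

@dataclass
class OnboardingSignals:
    # Illustrative signals an onboarding platform might expose
    data_inconsistency: float   # 0-1: mismatches across submitted documents and fields
    device_risk: float          # 0-1: device fingerprint reputation
    behavioral_anomaly: float   # 0-1: deviation from typical applicant behaviour

def application_risk(s: OnboardingSignals) -> float:
    """Blend signals into a single 0-1 risk score (weights are illustrative)."""
    return 0.40 * s.data_inconsistency + 0.35 * s.device_risk + 0.25 * s.behavioral_anomaly

def onboarding_decision(s: OnboardingSignals) -> str:
    """Adaptive verification: add friction only when the score warrants it."""
    score = application_risk(s)
    if score >= 0.75:
        return "decline_and_review"    # route to manual fraud review
    if score >= 0.40:
        return "step_up_verification"  # e.g. request a liveness check or extra documents
    return "approve"                   # low risk: keep onboarding frictionless

# A clean application passes untouched; a suspicious one is challenged
print(onboarding_decision(OnboardingSignals(0.10, 0.05, 0.10)))  # approve
print(onboarding_decision(OnboardingSignals(0.60, 0.50, 0.40)))  # step_up_verification
```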

Challenges:

• Synthetic identities: Fraudsters can leverage agentic AI to create, test, and dynamically adapt their approach to generating high-quality fake customer profiles. This reduces the cost of failure and, crucially, keeps them one step ahead of a financial institution’s evolving control framework.

• Exploitation of AI-powered onboarding: While AI can strengthen a bank’s defenses, it can also prove to be its Achilles heel. Recall that agentic AI systems are objective-driven and largely autonomous. Consequently, an AI agent tasked with opening accounts using customer profiles based on false or partially true data will seek to identify and bypass your controls.

When they have the keys: Account takeover (ATO)

ATO remains a significant threat to the sector, with retail banking customers vulnerable to increasingly sophisticated social engineering (phishing and vishing) attacks and malware. As with account opening, agentic AI brings both new opportunities to defend against ATO and new risks.

Opportunities:

• Real-time behavioral analysis: AI-driven fraud prevention systems can detect unusual login behaviors, such as access from unusual locations, rapid credential attempts, or deviations in typing speed and mouse movements.

• AI-powered authentication enhancements: Your approach to authentication can be enhanced by integrating AI that dynamically assesses risk levels and responds with “step-up” verification when the risk warrants it (a sketch combining these login-risk signals follows this list).

• As agentic AI enters the consumer market, we will also need to consider the risks associated with retail banking customers seeking to access their accounts using such agents.
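
To make the real-time behavioural analysis and step-up points above concrete, here is a minimal Python sketch of a login-risk check. The feature names, weights, and thresholds are hypothetical; a production system would learn per-customer baselines rather than rely on fixed cut-offs.

```python
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    # Illustrative features a session-risk engine might capture
    distance_from_usual_km: float   # distance from the customer's usual login locations
    failed_attempts_last_hour: int  # recent credential-stuffing / brute-force signal
    typing_speed_zscore: float      # deviation from the customer's typing baseline
    mouse_path_zscore: float        # deviation from the customer's pointer behaviour

def login_risk(a: LoginAttempt) -> float:
    """Blend contextual and behavioural signals into a 0-1 risk score (illustrative weights)."""
    location = min(a.distance_from_usual_km / 1000.0, 1.0)
    velocity = min(a.failed_attempts_last_hour / 10.0, 1.0)
    behaviour = min((abs(a.typing_speed_zscore) + abs(a.mouse_path_zscore)) / 6.0, 1.0)
    return 0.35 * location + 0.35 * velocity + 0.30 * behaviour

def authentication_action(a: LoginAttempt) -> str:
    score = login_risk(a)
    if score >= 0.70:
        return "block_and_alert"  # likely takeover attempt
    if score >= 0.35:
        return "step_up"          # e.g. passkey prompt or out-of-band confirmation
    return "allow"

print(authentication_action(LoginAttempt(5, 0, 0.2, 0.1)))     # allow
print(authentication_action(LoginAttempt(2500, 6, 2.5, 1.8)))  # block_and_alert
```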

Challenges:

• AI-augmented phishing attacks: Fraudsters can use agentic AI to craft hyper-personalised phishing emails and voice calls that mimic a bank’s communication style, making it harder for customers to recognise scams. Researchers found a 135% rise in novel social engineering attacks among their customers between January and February 2023, a rise they attribute to the widespread adoption of LLMs, which enable cybercriminals to launch sophisticated social engineering campaigns at scale.

• Bypassing AI-defended logins: Advanced AI can analyze and predict security questions, CAPTCHA responses, or biometric weaknesses, potentially allowing fraudsters to circumvent AI-powered defenses. While it has been necessary for some time to differentiate between human and non-human users, the arrival of agentic AI will increase both the necessity and the complexity of doing so without relying on behavioural signals.

The human factor: Our growing scam threat

Our existing efforts to secure the financial services sector have already caused threat actors to pivot. The growth in fraud involving customers authorising a payment is evidence of this shift. Agentic AI will empower fraudsters to craft still more convincing, scalable, and cost-effective campaigns against retail banking customers.

Opportunities:

• Enhanced scam-detection models: AI can identify scam-related patterns of behavior in digital channels and challenge customers in real time. The UK’s Payment Systems Regulator has already required Payment Service Providers (PSPs) to provide customers with a clear assessment of why their payment might be linked to a scam. Banks such as Australia’s NAB are already utilising such signals to dissuade customers from making payments, resulting in a significant reduction in losses. A minimal sketch of this kind of reason-based challenge follows this list.

• Voice and video analysis: AI-powered fraud prevention systems can analyze call recordings and video interactions to detect deepfake-based impersonation scams before victims transfer funds. The embedding of such technology into smartphones is an encouraging development, with Google deploying scam detection to its Pixel devices. This feature automatically runs in the background of calls that could be scams. If the likelihood of a scam is high, the device alerts the user with notifications, sounds, and vibrations.
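
As a minimal illustration of the reason-based, real-time challenge described in the first point above, the Python sketch below assembles a payment-time warning from a handful of scam signals. The signal names and the two-signal threshold are assumptions for illustration, not a description of any regulator’s or bank’s actual logic.

```python
from dataclasses import dataclass

@dataclass
class PaymentContext:
    # Illustrative signals available at the point of payment
    new_payee: bool               # first payment to this beneficiary
    on_active_call: bool          # customer is mid-call, a common coaching signal
    amount_vs_typical: float      # amount as a multiple of the customer's usual payment
    payee_reported_before: bool   # beneficiary appears in consortium scam reports

def scam_reasons(ctx: PaymentContext) -> list[str]:
    """Collect plain-language reasons the payment may be linked to a scam."""
    reasons = []
    if ctx.new_payee:
        reasons.append("You have never paid this account before.")
    if ctx.on_active_call:
        reasons.append("You are on a call; fraudsters often coach victims in real time.")
    if ctx.amount_vs_typical >= 3:
        reasons.append("This amount is far larger than your usual payments.")
    if ctx.payee_reported_before:
        reasons.append("This account has been linked to previous scam reports.")
    return reasons

def challenge_customer(ctx: PaymentContext) -> str:
    """Pause the payment and explain why, rather than showing a generic warning."""
    reasons = scam_reasons(ctx)
    if len(reasons) >= 2:
        return "PAUSE PAYMENT:\n- " + "\n- ".join(reasons)
    return "PROCEED"

print(challenge_customer(PaymentContext(True, True, 4.0, False)))
```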

Challenges:

• AI-powered social engineering: Agentic AI can conduct highly sophisticated automated scam operations, pivoting when returns decline and refining its approach as controls are put in place. Such campaigns will likely focus on the exploitation of trusted parties, and they will do so with convincing precision.

• Automated fraud networks: Fraudsters can use AI bots to engage with victims over a long period of time, gradually gaining their trust before seeking to exploit them. Crucially, as with legitimate businesses, agentic AI offers criminals the opportunity to improve their return on investment. Lower costs and higher returns are the order of the day.

Mule-herding: The flipside of the scam threat 

Without mules, the opportunity for criminals to commit fraud is significantly curtailed. Agentic AI will provide criminals with new opportunities in the recruitment, management, and exploitation of both complicit and unwitting customers.

In many cases the conflict between mules and financial institutions remains asymmetrical, but it doesn’t have to be that way.

Opportunities:

• AI-powered transaction monitoring: By analyzing account activity in real time, AI can flag suspicious transactions, such as rapid fund transfers across multiple accounts, which are characteristic of money mule behaviour. 

• Behavioral biometrics for mule detection: Artificial intelligence can analyze transaction behaviours and flag accounts with inconsistent usage patterns, suggesting potential mule activity.

• Ideally, banks combine both approaches alongside robust device profiling, with consortium analytics enabling these signals to be leveraged by both sending and receiving institutions. A minimal sketch of the transaction-monitoring element follows this list.
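
The Python sketch below flags the rapid in-and-out movement of funds that is characteristic of mule accounts. The window, outflow ratio, and beneficiary count are illustrative thresholds; in practice they would be tuned per segment and combined with behavioural biometrics, device profiling, and consortium data, as noted above.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Transaction:
    timestamp: datetime
    amount: float        # positive = credit to the account, negative = debit
    counterparty: str

def looks_like_pass_through(txns: list[Transaction],
                            window: timedelta = timedelta(hours=2),
                            outflow_ratio: float = 0.9,
                            min_beneficiaries: int = 2) -> bool:
    """Flag credits that are rapidly dispersed to multiple accounts (illustrative thresholds)."""
    for credit in (t for t in txns if t.amount > 0):
        debits = [t for t in txns
                  if t.amount < 0 and credit.timestamp <= t.timestamp <= credit.timestamp + window]
        sent = -sum(t.amount for t in debits)
        beneficiaries = {t.counterparty for t in debits}
        if sent >= outflow_ratio * credit.amount and len(beneficiaries) >= min_beneficiaries:
            return True
    return False

# Example: funds arrive and are split across three accounts within an hour
now = datetime(2025, 1, 1, 12, 0)
history = [
    Transaction(now, 5000.0, "victim_transfer"),
    Transaction(now + timedelta(minutes=20), -1700.0, "acct_A"),
    Transaction(now + timedelta(minutes=35), -1600.0, "acct_B"),
    Transaction(now + timedelta(minutes=50), -1650.0, "acct_C"),
]
print(looks_like_pass_through(history))  # True
```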

Challenges: 

• AI-evading mule networks: Fraudsters can exploit AI to mimic normal banking behaviors, making mule accounts appear legitimate for extended periods of time. We’re already seeing improved bank controls leading fraudsters to pivot to this strategy.

• AI-powered recruitment: The technology will empower criminals to seek out and qualify suitable mule candidates, with AI assessing a prospective mule’s transactional profile against the likely proceeds of a crime and taking a financial institution’s control framework into account when deciding which mules to leverage.

• AI-generated identity laundering: Returning to the issue of AO, there is little doubt that criminals will use AI to create digital identities to bypass KYC checks, allowing them to set up mule accounts with greater ease. 

Balancing current and future threats

This is not a case of if but when. The efficiencies and opportunity for improved ROI offered by agentic AI are as relevant to criminals as they are to legitimate businesses.

Asymmetric engagement is a conflict between adversaries whose relative strengths, strategies, and tactics differ drastically.

The degree to which this conflict proves to be symmetrical will depend on how the financial services sector responds. A symmetrical response will require the sector to rapidly adopt new technologies to detect and mitigate risk across the customer lifecycle. Crucially, that technology should also support enhanced collaboration between banks, regulators, and customers.

This is ultimately a fight between rule-breakers and rule-followers. As fraud evolves, so too must the strategies to combat it, ensuring AI remains a force for protection rather than exploitation in the digital banking sphere.
