We can and will thwart the fraudsters with a courageous course of action.

Despite record investments in detection and prevention, consumers in the US alone lost nearly $8.8 billion to fraud last year, a 44% surge from 2021. Global financial institutions, including Wells Fargo & Co. and Deutsche Bank AG, are expressing deep concern over the rise of fraud, labeling it one of the industry's gravest threats. Beyond the financial losses incurred combating scams, the industry risks eroding the trust of disillusioned customers. In the words of James Roberts, who heads fraud management at the Commonwealth Bank of Australia, it is an "arms race" the industry is not yet winning.

The line between reality and illusion is becoming increasingly blurred in our AI-driven world. Let's focus on a tangible concern: a future where a substantial chunk of our visual and auditory experiences is computer-generated. Within this evolving landscape, a pressing issue is emerging: AI-driven deception. It demands our immediate attention and intervention.

In this intricate web of financial deceit, fraudsters employ an array of tactics to exploit individuals and institutions. One glaring example is the sophisticated Business Email Compromise (BEC) scam. This crafty ploy targets businesses and individuals who conduct legitimate fund-transfer transactions. BEC scammers infiltrate genuine email accounts through social engineering or computer intrusion, then use those accounts to execute unauthorized transfers.

But it doesn't stop at transferring funds: these schemes often manipulate legitimate accounts to harvest Personally Identifiable Information (PII) and even drain cryptocurrency wallets.

As we confront this ever-evolving threat, understanding the scope of the issue becomes paramount. 

According to the FBI, exposed global losses from BEC scams rose a staggering 17% between December 2021 and December 2022. This menace has cast its shadow across all 50 US states and 177 countries, with fraudulent transfers sent to over 140 nations.

These formidable challenges raise crucial questions for AI creators, marketers, and users:

  • Should You Create or Market It? Before releasing synthetic media or generative AI products, weigh the risks against the benefits. Is the risk of misuse significant enough to overshadow legitimate use? As Dr. Ian Malcolm warned in Jurassic Park, being able to create something does not settle whether we should.
  • Are You Effectively Mitigating Risks? If you opt to proceed, proactive measures to prevent misuse are imperative. Mere disclaimers and warnings fall short; effective prevention must be embedded in the very essence of the product's design, balancing functionality against vulnerability to abuse.
  • Are You Relying Exclusively on Post-Detection? While techniques to detect AI-generated content are advancing, relying solely on after-the-fact detection isn't foolproof. The burden of identifying AI-driven deception should shift from consumers to creators.

To fight fraud, some companies have added video or voice calls to their authentication and authorization processes. But as the following case shows, even a video conference or a voice call is no longer enough.

Forbes reports that in 2019, in a plot that reads like a high-tech thriller, the CEO of an unnamed UK-based energy firm fell victim to a cunning AI-driven scam. Convinced he was speaking with his German boss, the chief executive of the firm's parent company, he transferred €220,000 (roughly $243,000) to the bank account of a supposed Hungarian supplier. Little did he know, he was dancing to a fraudster's AI-generated melody.

The fraudster may have pulled recordings of the German executive's voice from the public domain and used deepfake audio software to clone it. Whatever the tactic, the impersonation artfully mimicked the executive's voice, down to the subtle nuances of his accent and his unique "melody." Rüdiger Kirsch of Euler Hermes Group SA, the company's insurance provider, disclosed these startling details to The Wall Street Journal.

The heist unfolded across three meticulously orchestrated phone calls. The first initiated the illicit transfer; the second falsely assured the CEO that the money would be reimbursed; the third requested a follow-up payment. By then, the CEO's skepticism was ignited: the promised reimbursement had not arrived, and he noticed the call originated from an Austrian number. He wisely refused to initiate a second payment, but the initial €220,000 had already vanished down a digital rabbit hole.

The labyrinthine trail of the stolen funds was as intricate as the scam itself: the money moved from the Hungarian bank account through Mexico before being dispersed across various locations.

This deepfake audio heist is the first media-reported case of AI voice technology weaponized for fraud. It may be only the tip of the iceberg; other instances of AI-fueled deception have likely flown under the radar.

A recent Bloomberg Businessweek article, "The Next Wave of Scams Will Be Deepfake Video Calls From Your Boss," notes that Lina Khan, Chair of the US Federal Trade Commission, cautioned that AI "turbocharges" fraud and urged heightened vigilance from law enforcement.

Before the AI explosion took hold and became accessible to virtually anyone with an internet connection, the world was already grappling with an onslaught of financial fraud. Among the emerging tactics are some that go well beyond what off-the-shelf technology can accomplish.

The global cost of cybercrime, encompassing scams and related activities, is projected to reach an astonishing $8 trillion this year. If cybercrime were a country, that cost would surpass Japan's entire economic output. By 2025, the figure is expected to soar to $10.5 trillion, more than a threefold increase over a decade, according to Cybersecurity Ventures, which pegged the 2015 cost at $3 trillion.

Against this backdrop, financial institutions are facing an uphill battle. The banking industry's response involves consumer education about risks and an increased investment in offensive and defensive technology. 

Yet the challenge persists. As technology evolves, fraudsters adapt. Banks face a dilemma: serve customers elegantly and frictionlessly while tightening security controls to block scammers. Stolen data makes fake IDs easy to fabricate, and scammers can now generate convincing AI videos and voices. The question of who bears the financial burden of losses grows murkier, fueling debates over the responsibility and liability of banks, consumers, and tech companies.

In this dynamic landscape, collaboration becomes our most potent weapon. The responsibility to combat fraudsters rests on our collective shoulders. As we navigate this evolving terrain, an intriguing avenue awaits exploration: the integration of biometrics. Biometric technology offers an extra layer of protection against AI-driven deception. By anchoring financial transactions to signal intelligence, behavior patterns, and unique biological markers, we can weave an added dimension of trust into our digital interactions.

Biometrics offers compelling benefits in the fight against deepfake and synthetic fraud:

Uniqueness and Non-repudiation: Biometric identifiers, such as fingerprints, iris scans, and facial recognition, are inherently unique to each individual. This uniqueness supports high-confidence identity verification and minimizes the risk that a genuine user can later repudiate a transaction. The property is especially potent in countering deepfake fraud, where impersonators attempt to mimic someone's appearance or voice.

Inherent Resistance to Replication: Biometric traits are challenging to replicate convincingly. For instance, copying an individual's precise fingerprint patterns or facial features with sufficient accuracy to deceive biometric systems is extremely difficult. 

Real-time Authentication: Biometric authentication is often performed in real-time, providing an immediate response to verify an individual's identity. This real-time aspect is crucial in countering deepfake or synthetic fraud attempts, as it can swiftly identify anomalies in behavior or presentation that may indicate fraudulent activity.
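
To make the real-time requirement concrete, here is a minimal sketch in Python of a challenge-response check. The five-second deadline and the verify_response matcher are hypothetical placeholders, not a prescribed design; the point is that a freshly generated prompt plus a response deadline is hard for replayed or pre-rendered media to satisfy.

```python
# Minimal sketch of a real-time challenge-response check. The deadline and
# the verify_response matcher are illustrative assumptions, not a standard.
import secrets
import time
from typing import Callable

def live_challenge(verify_response: Callable[[str], bool],
                   deadline_s: float = 5.0) -> bool:
    """Issue a fresh random prompt (e.g., a phrase the user must read aloud)
    and accept only a matching response delivered before the deadline."""
    prompt = secrets.token_hex(4)      # unpredictable, so it can't be pre-recorded
    issued = time.monotonic()
    matched = verify_response(prompt)  # hypothetical modality-specific matcher
    return matched and (time.monotonic() - issued) <= deadline_s

# A replay attack fails because yesterday's recording cannot contain today's
# random prompt; a slow synthesis pipeline misses the deadline instead.
```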

Multimodal Authentication: Many biometric systems support multimodal authentication, drawing on multiple biometric traits simultaneously; for example, combining facial recognition with voice recognition or fingerprint scanning. Requiring fraudsters to defeat several independent layers of verification makes deception significantly more challenging.
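
As an illustration, here is a minimal sketch of score-level fusion in Python; the modality names, weights, and acceptance threshold are illustrative assumptions rather than a production policy.

```python
# Minimal sketch of score-level multimodal fusion. Scores, weights, and the
# threshold below are illustrative assumptions, not a production policy.

def fuse_scores(scores: dict, weights: dict, threshold: float = 0.85) -> bool:
    """Combine per-modality match scores (0.0-1.0) into a single decision.

    scores  -- e.g. {"face": 0.97, "voice": 0.62} from independent matchers
    weights -- relative trust in each modality, covering the same keys
    """
    total_weight = sum(weights[m] for m in scores)
    fused = sum(scores[m] * weights[m] for m in scores) / total_weight
    return fused >= threshold

# A cloned voice may score well on its own, but a weak face match drags the
# fused score below the threshold, and the session is rejected.
accept = fuse_scores({"face": 0.41, "voice": 0.93},
                     {"face": 0.6, "voice": 0.4})
print(accept)  # False: one spoofed modality is not enough
```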

Behavioral Biometrics for Continuous Monitoring: Beyond physical traits, behavioral biometrics such as typing cadence, gesture dynamics, and navigation habits can play a pivotal role in detecting fraudulent activity. If a fraudster attempts to mimic a person's behavior, anomalies can be detected, triggering alerts and preventing unauthorized access or transactions.
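
A minimal sketch of how such monitoring might flag an anomaly, assuming a hypothetical enrolled profile of inter-keystroke timings and a simple z-score cutoff:

```python
# Minimal sketch of continuous behavioral monitoring via keystroke timing.
# The enrolled profile values and the z-score cutoff are illustrative.
from statistics import mean

def is_anomalous(intervals_ms: list,
                 profile_mean: float, profile_std: float,
                 cutoff: float = 3.0) -> bool:
    """Flag a session whose mean inter-keystroke interval deviates too far
    from the user's enrolled typing profile."""
    session_mean = mean(intervals_ms)
    z = abs(session_mean - profile_mean) / profile_std
    return z > cutoff

# Enrolled user types with ~120 ms between keys (std 15 ms); a bot or
# impostor working at a very different cadence stands out immediately.
print(is_anomalous([45, 50, 48, 52, 47],
                   profile_mean=120, profile_std=15))  # True
```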

Incorporating biometrics into the fight against deepfake and synthetic fraud empowers businesses and institutions to establish a robust defense beyond traditional methods. 

The time to act is now – let's fortify our fraud defenses against the amorphous and burgeoning threat and safeguard the authenticity of our digital world.

Join Us at BioCatch Connect

Theresa will be joining us on October 11 at BioCatch Connect in New York City. For more information on attending the event and how you can hear Theresa’s keynote, please reach out to your BioCatch Account Manager or contact us today.
