Over the last year, there has been a major shift in the global cybercrime landscape from traditional fraud-based tactics to scam-based tactics. Fraud is no longer just about criminals buying lists of personal data on the dark web and using that information to take over an account. It has become far more brazen, often involving direct contact with consumers through sophisticated voice scams, financial malware, and remote access tools. Last year, BioCatch data revealed that social engineering scams increased 57%, and one out of every three impersonation scams involved a payment of over $1,000 USD.

As a result of this shift, legacy fraud controls that rely on device, IP, and network attributes are struggling to detect these types of fraud. Social engineering scams are hard to track and prevent because they usually involve a criminal convincing a genuine customer to initiate a payment themselves, so to most fraud prevention systems the transaction looks authentic.

BioCatch recently hosted two leading fraud practitioners from ANZ: Jessica Bottega, Senior Manager of Fraud Analytics, and Amanda Tilley, Application Fraud Analytics Lead. ANZ is a multinational banking and financial services company headquartered in Melbourne, Australia. The company has more than 50,000 employees worldwide serving over nine million customers across the Asia Pacific, EMEA, and North American markets. Here are some key highlights of the conversation.

The Rise of the Scammer

Criminals use a myriad of scam tactics to target victims. One recent scheme that has been widely publicized in Australia is the money recovery scam. Criminals posing as official bank or government representatives contact previous fraud victims and promise to help them recover previously lost funds in exchange for an advance fee. Delia Rickard, deputy chair of the Australian Competition and Consumer Commission (ACCC), which runs Scamwatch, recently stated in an interview, “These scams can lead to significant psychological distress as many of the people have already lost money or identity information.” Victims may be convinced to make the payment directly or asked to grant remote access to their device.

What makes these scams so hard to detect? To most fraud controls, all the traditional indicators point to the legitimate customer, because it genuinely is the customer initiating the activity.

“Traditional fraud detection methods are ineffective at detecting remote access scams. The scammers connect to the human element and a large amount of time is spent setting the scene with the customer,” noted Jessica Bottega.

Behavioral biometrics is one innovative approach that banks like ANZ are taking to combat the rise in scam-based fraud. “With scams traditionally, we think about fraud as just the payment, but in the digital space, it’s login and payment. But in a scam scenario, a session might be 40 minutes of a victim interacting online with the session,” stated Tim Dalgleish, Vice President, Global Advisory at BioCatch. “There’s a whole lot of behavior in between, before the payment occurs, that can indicate a scam is in progress. By weaving this behavioral data into real-time decisioning, you can take this upstream and not focus only on that final payment decision.”
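To make Dalgleish’s point concrete, below is a minimal, hypothetical sketch of how in-session behavioral signals might feed a real-time decision before the final payment step. The signal names, weights, and thresholds are illustrative assumptions for this post only; they are not BioCatch’s actual model or API.

```python
# Hypothetical sketch: scoring in-session behavioral signals upstream of the
# payment, rather than only at the moment of payment. All features, weights,
# and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    duration_minutes: float           # unusually long, guided sessions can indicate coaching
    idle_gaps_over_30s: int           # pauses while the victim listens to instructions
    remote_access_suspected: bool     # input patterns consistent with remote access tooling
    hesitation_on_payee_entry: float  # 0..1, hesitancy while entering a new payee

def scam_risk_score(s: SessionSignals) -> float:
    """Combine upstream behavioral signals into a 0..1 scam-risk score."""
    score = 0.0
    if s.duration_minutes > 30:
        score += 0.3
    score += min(s.idle_gaps_over_30s, 5) * 0.05
    if s.remote_access_suspected:
        score += 0.4
    score += 0.25 * s.hesitation_on_payee_entry
    return min(score, 1.0)

# Decision is made during the session, before the payment is submitted.
signals = SessionSignals(42.0, 6, False, 0.8)
if scam_risk_score(signals) > 0.6:
    print("Hold payment for review and trigger a scam-warning prompt")
```

The point of the sketch is the timing: the behavior accumulated across the whole session informs the decision, instead of the payment instruction alone.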

Detecting Good and Bad Applicants

Social engineering scams aren’t the only threat challenging fraud practitioners. New account application fraud continues to be an operational headache, and traditional fraud controls are missing the mark. Knowledge-based identity proofing mechanisms used in Know Your Customer (KYC) checks have been ineffective for some time. Data breaches and phishing scams have generated a wealth of personal data for sale on the black market, and the ease of searching online public databases and social media profiles for information has made knowledge-based methods too easy to defeat. Device ID on its own cannot separate a good applicant from a bad one, because a new customer has no prior relationship or profile with the organization.

“A lot of application fraud systems as they currently stand are very binary in nature and utilize data points that come from the application itself,” stated Bottega. “Having the information that comes from behavioral biometrics allows you to have insight into how the customer is interacting with the website. This provides rich information you don’t normally get from existing fraud detection systems.” Behavioral insights can offer a very strong indicator of application fraud; for example, a user may demonstrate unfamiliarity with the data they are entering into an application form. BioCatch research shows that 64% of confirmed account opening fraud cases exhibit a lack of familiarity with personal data.
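As a simple illustration of what a “familiarity with the data” signal could look like, here is a minimal, hypothetical sketch. The keystroke features, weights, and thresholds are assumptions made for this example, not BioCatch production logic.

```python
# Hypothetical sketch: a "data familiarity" signal for application fraud.
# A genuine applicant typically types their own date of birth or address
# fluently; someone reading stolen data tends to pause and correct more.
# Feature names and weights are illustrative assumptions.

def familiarity_score(keystroke_gaps_ms, corrections, field_length):
    """Return 0..1 where higher means more familiar with the data being entered."""
    if not keystroke_gaps_ms or field_length == 0:
        return 0.0
    avg_gap = sum(keystroke_gaps_ms) / len(keystroke_gaps_ms)
    gap_penalty = min(avg_gap / 1000.0, 1.0)            # slow, halting typing lowers the score
    correction_penalty = min(corrections / field_length, 1.0)
    return max(0.0, 1.0 - 0.6 * gap_penalty - 0.4 * correction_penalty)

# Example: a date-of-birth field typed slowly with several corrections.
score = familiarity_score(keystroke_gaps_ms=[420, 610, 900, 750], corrections=3, field_length=10)
print(f"familiarity={score:.2f}")  # a low score would raise the application's risk
```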

Perhaps more important is the case of identifying high-risk mule accounts in the application process. This is a harder business case to make because it is difficult to put a figure on the actual loss. “With lending applications, it’s quite easy because you can put a number to it,” said Amanda Tilley. “But for deposits, one of the first questions is how much is it going to save us. It’s more difficult when you have a product that doesn’t have a loss figure attached to it.”

Making the Business Case for Fraud Prevention

This leads us to the golden question: How do you make the business case for fraud prevention? Nothing good comes without a cost, and consumer protection is no exception. Costs to the business can include increased operational expenses, losses from undetected fraud, or potential lost revenue from inadvertently turning away a good customer. Costs to consumers can include increased friction when new controls are added.

This balance between customer experience and fraud prevention is a very real struggle that practitioners face with every change they make. In a recent report, Don’t Treat Your Customer Like a Criminal, Gartner noted, “Continuing to insist that customers jump through multiple hoops to prove themselves worthy of a business relationship ignores the reality that customers today have multiple options available.”

This is why getting the business case right up front is so critical for garnering support for new technology investment, and that process often involves multiple stakeholders with their own unique priorities and requirements. “What we started to do when we went on this journey was setting the scene for stakeholders of how behavioral biometrics would co-exist in the fraud toolkit,” said Bottega.

While measuring fraud savings and losses is fairly straightforward, not every problem being solved is easy to quantify. Consider the case of mule accounts, where there might not be an actual fraud loss but instead operational costs or reputational risk. In adding behavioral biometrics to identify high-risk applications for new deposit accounts, ANZ focused its business case on operational expenses. “We spent a lot of time forecasting what it would cost us, especially on our downstream operations team,” noted Tilley.
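For readers building a similar case, the kind of forecast Tilley describes can be reduced to a few lines of arithmetic. The sketch below uses entirely hypothetical placeholder figures, not ANZ or BioCatch data, simply to show the shape of the calculation.

```python
# Hypothetical sketch of an operational-cost forecast for manual application review.
# All figures are illustrative placeholders, not ANZ or BioCatch data.
monthly_applications = 50_000
flagged_rate = 0.02          # share of applications flagged for manual review
review_cost_per_case = 35.0  # fully loaded analyst cost per reviewed case (USD)
expected_reduction = 0.30    # assumed reduction in manual reviews from better risk signals

baseline_reviews = monthly_applications * flagged_rate
baseline_cost = baseline_reviews * review_cost_per_case
projected_saving = baseline_cost * expected_reduction

print(f"Baseline reviews per month: {baseline_reviews:.0f}")
print(f"Baseline review cost per month: ${baseline_cost:,.0f}")
print(f"Projected monthly saving: ${projected_saving:,.0f}")
```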

When it comes to preventing online fraud and theft, practitioners can get hyper-focused on technical complexity, financial analysis, maintaining compliance, and operational efficiency. Along the way, we can easily forget that the potential victims we’re trying to protect, our accountholders and prospective customers, are real people. We must remember there is also a human side of fraud.

Learn More

Access the latest research from Aite-Novarica, On the Precipice of the Scampocalypse, which examines the rise in social engineering scam activity globally, the challenges of tracking and measuring its impact, and how potential regulatory actions could shape reimbursement models in the future.
