BioCatch’s 2020 Predictions: 10 Cybercrime and Fraud Trends to Expect in the New Year

Dec. 13, 2019 | by Uri Rivner

As we head into the new year, BioCatch’s Chief Cyber Officer, Uri Rivner, has looked into his crystal ball once again for our annual cybercrime and fraud predictions blog. In 2019, Uri’s predictions proved to be incredibly prescient. You can review Uri’s 2019 predictions here.

Fraudsters show no sign of slowing down, and in 2020, they will continue to adapt to the ever-changing landscape that is the age of digital transformation. Read on for invaluable insight into the top ten cybercrime and fraud trends your business should look out for in 2020.

2020 Cybercrime and Fraud Predictions

  • Deep fake technology will be used for identity theft: Deep fake technology that spoofs the human voice is already being used to attack call centers and in business email compromise scams. In 2020, we should see the early signs of deep fakes being used to defeat face recognition controls, including those using state-of-the-art liveness tests. The industry will have to come up with silent, behind-the-scenes controls that can offset the vulnerabilities of overt biometric authentication.
  • LiFi networks will be targeted by hackers: There’s a new, promising high-speed Internet technology in town, and it’s based on visible light rather than radio waves. While full commercial use is still a few years away, and the tech is limited to close proximity given the physical limitations of light, a network based on LiFi should be as hackable as WiFi and might be more prone to physical interference. We should see the first demonstrations of LiFi hacks in the new year.
  • UK identity databases will come under attack by fraudsters: Multiple factors will drive criminals targeting the UK financial sector to boost their account opening fraud activities: the success banks have had in fighting traditional fraud, the introduction of tighter controls against social engineering, and the coming implementation of PSD2 all make account takeover harder for them. To fuel this expected boost, hackers will focus their attention on UK identity databases, attempting to collect multiple data points on each UK citizen, much as has already happened in the US over the last few years. In the US, synthetic identity fraud is the fastest growing type of financial crime, with an average charge-off balance of $15,000 per instance, according to a Federal Reserve study.
  • FinTech companies will be fraudsters’ next big target: While banks and credit card issuers in the US have been stepping up their defenses against account opening and account takeover fraud, the fintech sector, which has largely escaped the wrath of fraudsters, will begin to see a sharp increase in online fraud. Because they are less heavily regulated, fintech companies are more agile and quicker to introduce new functionality. However, the lack of proper defenses, together with their lack of access to the banking sector’s fraud consortium databases, will leave them far more exposed.
  • Chatbot and voice assistant payment fraud will rise: Many financial institutions are beginning to deploy AI-based customer assistance tools, such as chatbots and voice-based interfaces, to broaden their offerings beyond the traditional online and mobile channels. As soon as those new channels begin to offer full functionality – say, moving money from a user’s account – they’ll be targeted by criminals and will need to be protected against account takeover. Researchers have already proven that lasers can be used to spoof voice commands on physical voice assistant devices, and it would be even easier to attack their virtual equivalents.
  • eComm fraud AI models will become half-blinded: One of the unspoken secrets of AI is that it’s only as good as the tagged data that is fed to it. With the increase in account opening fraud, a huge amount of eComm fraud is going to come not from compromised credit cards, but from new credit and debit cards opened online using stolen identities. In these cases, there are no chargebacks, because no real user will ever call to complain. The result is that AI models will become half-blinded: the criminal patterns those models use to pinpoint fraud will be suppressed by seemingly genuine activity, as criminals use the fraudulently opened accounts to make purchases just as a legitimate user would.
  • AI will help prevent subscription services fraud: The big content streaming companies have formed an alliance designed to fight password sharing and criminal offerings of compromised passwords. Unfortunately, device-based and location-based controls are no longer holding up, as technologies to spoof devices and geolocation are readily available. New technologies such as behavioral biometrics and unsupervised anomaly detection AI will prove to fare much better against misuse of subscription services (see the sketch following this list).
  • Zelle fraud levels will surge: As many regional banks and credit unions add Zelle P2P capabilities to their online and mobile banking, criminals are beginning to single out the US as a new land of opportunity. Well-proven social engineering techniques are already in use, and attacks will escalate and quickly adapt as new controls are added – with the result that real users suffer higher friction while fraud levels surge.
  • Selfie biometric data will be the new dark web money maker: There’s already a vibrant dark web trade in personalized biometric data, and it will continue to grow in 2020. More websites and applications are turning to selfie-based verification, and more online account opening flows are moving from obsolete controls, such as knowledge-based authentication, to more modern ones, like selfie-to-document matching. Some criminals will focus on collecting data from open sources and social media. Others will target – and already have targeted – users in phishing campaigns designed to steal not just static credentials, but also selfies and videos of the user’s face. Another threat is that advanced malware capabilities, which are currently in the hands of state-sponsored actors and other high-end players, will find their way into criminal hands and be used to defeat mobile device authentication.
  • Money mules will become an endangered species: In an era of easy account opening fraud, why spend resources and take unnecessary risks by interacting with mules? Money mules won’t go away in 2020, but criminals engaged in cashing out compromised bank accounts will begin shifting away from classic recruitment options and start using fraudulently opened bank accounts instead. The ease of fraudulent account opening will also facilitate other crimes, such as money laundering and impersonating the receiving end of P2P money transfers like Zelle.
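
To make the unsupervised anomaly detection idea from the subscription services prediction concrete, here is a minimal sketch, assuming scikit-learn’s IsolationForest and a handful of made-up per-account usage features (distinct devices, distinct cities, logins per day). It illustrates the general technique only, not how BioCatch or any streaming alliance actually models subscription misuse.

```python
# Minimal sketch: unsupervised anomaly detection over per-account usage
# features. The features and numbers below are hypothetical illustrations.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated features per account: [distinct_devices, distinct_cities, logins_per_day]
normal_accounts = np.column_stack([
    rng.poisson(2, 1000),   # a household uses a couple of devices
    rng.poisson(1, 1000),   # mostly one city
    rng.poisson(3, 1000),   # a few logins per day
])
shared_accounts = np.column_stack([
    rng.poisson(12, 20),    # many devices -> password sharing / resold credentials
    rng.poisson(6, 20),     # many cities
    rng.poisson(25, 20),    # near-constant logins
])

model = IsolationForest(contamination=0.02, random_state=0)
model.fit(normal_accounts)                          # trained on unlabeled, mostly-genuine traffic

flagged = (model.predict(shared_accounts) == -1).sum()  # -1 marks anomalies
print(f"Flagged {flagged}/{len(shared_accounts)} suspicious accounts")
```

The appeal of this approach is that it needs no fraud labels: the model learns what ordinary household usage looks like and flags accounts whose usage pattern falls far outside it.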

At the core of our cybercrime problem is a lack of effective methods for establishing and verifying digital identity in the constantly evolving digital ecosystem. New solutions are addressing the challenges, replacing outdated approaches that rely on static information with much more effective, multi-factor tools. 

Though emerging solutions leverage sophisticated models to analyze data about the user and their behavior, it’s critical that the data collected for authentication cannot be stolen or replayed. Device ID, for example, can be stolen and location data can be spoofed. As long as data is vulnerable, criminals can commit fraud. The one type of data that is unique and unable to be replicated is the way users behave and interact within a system or session.
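
As a rough illustration of why behavior is harder to replay than a static identifier, the sketch below compares a session’s typing rhythm (key dwell and flight times) against an enrolled profile. The timings, threshold, and distance check are hypothetical simplifications, not BioCatch’s behavioral biometrics: a stolen device ID is a single value an attacker can resend, while a typing rhythm has to be reproduced live, keystroke by keystroke.

```python
# Minimal sketch: comparing a session's typing rhythm to an enrolled profile.
# All timings and the tolerance are illustrative assumptions.
from statistics import mean

def dwell_times(events):
    """Per-key hold time: key-up timestamp minus key-down timestamp (ms)."""
    return [up - down for down, up in events]

def flight_times(events):
    """Gap between releasing one key and pressing the next (ms)."""
    return [events[i + 1][0] - events[i][1] for i in range(len(events) - 1)]

def profile(events):
    return (mean(dwell_times(events)), mean(flight_times(events)))

def matches(enrolled, session, tolerance_ms=25):
    """Crude check: does the session's typing rhythm resemble the enrolled profile?"""
    return all(abs(a - b) <= tolerance_ms for a, b in zip(enrolled, session))

# (key_down_ms, key_up_ms) pairs for a short typed string
enrolled_events = [(0, 95), (180, 270), (390, 480), (600, 688)]
session_events  = [(0, 60), (120, 170), (230, 280), (340, 395)]  # a different typist

print("Same typist?", matches(profile(enrolled_events), profile(session_events)))
```

Production systems look at far richer signals than two averages, but the principle is the same: authentication rests on how the session is actually being performed, not on a value that can simply be copied and resent.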

Organizations that are fastest to act with new, powerful, cutting-edge fraud prevention tools are the ones that will be least affected by fraudsters in 2020 and beyond. One of these solutions, and the one that Uri Rivner is focusing on, is behavioral biometrics.

Want to learn more about how to stay ahead of cybercrime trends with this exciting new technology? Read the data sheet here.

Topics: Fraud, Cybersecurity