Recently, I presented at the Datos Insights Financial Crime and Cybersecurity Forum on the BioCatch panel discussing digital identity. The session focused on the role of personally identifiable information (PII) in digital identity and how we move away from it.

The same is true for PII data. The customer or fraudster enters data into the application, but we really don't know which one is behind the keyboard. So, we must deploy defense in depth to assess the PII data and any other data associated with the application.

This is a challenge that I think can be solved in 2023. So, let’s start by looking at what these layered controls look like in a new account fraud scenario. 

There are a number of controls that can be implemented and tested to address the problem. Over a 1–2-year period, the fraud team can assess each control to validate its independent lift (how effective the control is by itself) and drop any controls that don't add enough value.

The first control is to determine whether a human or a bot is entering the data. There is a great deal of bot activity out there, and this became especially apparent during the pandemic, when some banks were seeing up to 50% of new account openings initiated by bots. Thus, the very first control in the funnel should be to cull out the bot applications. This is also a vital cost-saving measure: the more applications you cull out early, the fewer applications you pay to run through the entire control stack.
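The funnel ordering above can be sketched as follows. This is a minimal illustration, not a real bot detector: `looks_like_bot` is a hypothetical stand-in for vendor device and behavior signals, and the session fields are invented for the example.

```python
# Illustrative sketch: run the cheapest control (bot detection) first so that
# bot traffic never reaches the more expensive downstream controls.
# The signals checked here are placeholders, not real bot-detection logic.

def looks_like_bot(session: dict) -> bool:
    # Real bot detection uses rich device and behavior telemetry; this is a stub.
    return session.get("headless_browser", False) or session.get("fields_filled_ms", 9999) < 500

def assess_application(session: dict, paid_controls: list) -> str:
    if looks_like_bot(session):
        return "rejected:bot"          # culled early; no paid controls invoked
    for control in paid_controls:      # each call typically incurs a per-check fee
        if not control(session):
            return "review"
    return "accepted"
```

The key design point is simply ordering: the free (or cheap) check runs before any control that costs money per application.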

The second control is analysis of the address, phone number, and email address data fields.

1.    Do they match the other PII data (as seen in other activity on the internet and in selective databases)?
2.    Do we see these data elements associated with other known fraud (consortium data)?
3.    What type of phone number is it?  Mobile phone numbers are best. Fixed VoIP phone numbers (e.g., Comcast) can also be good. Non-fixed VoIP numbers should be prohibited. Landlines still need to be allowed because elderly customers may still have them, but be very leery of a text message capability tied to a landline.
4.    Does this applicant have access to the phone number? Send a text message to the mobile phone and have the person click a reply back button. This will confirm the person has access to the phone number being supplied. Plus, the ‘reply back’ text message can confirm the general physical location of the phone (e.g., city of Charlotte). And does that location match the address of the application?  Also, check for a SIM swap or recent phone port. It may be the customer’s phone number, but the fraudster may have taken it over as part of this application fraud.
5.    Look at the email address for obvious signs of fraud (e.g., recently created, not really used, an odd address like nnnn31@yap.com, etc.).
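The phone-type policy and the email red flags above can be sketched as simple heuristics. Everything here is an assumption for illustration: the policy table, the 30-day "recently created" threshold, and the regex for odd local parts. A real deployment would pull line type from a carrier-data vendor and email history from an email-intelligence service.

```python
import re
from datetime import date
from typing import List, Optional

# Hypothetical policy table reflecting the phone-type guidance above.
PHONE_POLICY = {
    "mobile": "allow",
    "fixed_voip": "allow",
    "non_fixed_voip": "block",
    "landline": "allow_with_caution",  # allowed, but be leery of SMS to a landline
}

def phone_decision(line_type: str) -> str:
    return PHONE_POLICY.get(line_type, "review")

def email_red_flags(address: str, created: date, last_used: Optional[date]) -> List[str]:
    """Cheap heuristics for the obvious email fraud signs mentioned above."""
    flags = []
    local_part = address.split("@")[0]
    if re.fullmatch(r"[a-z]{1,5}\d{2,}", local_part):  # odd pattern like nnnn31
        flags.append("odd_local_part")
    if (date.today() - created).days < 30:             # recently created
        flags.append("recently_created")
    if last_used is None:                              # not really used
        flags.append("never_used")
    return flags
```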


If you collect the person’s DMV information, you can run a DMV check about 50% of the time.
You can also run a Social Security number (SSN) check using the eCBSV interface with the Social Security Administration (name, date of birth, and SSN matching). This control may be more effective if you are concerned the application data is a synthetic ID, as synthetic identities tend to produce a high level of mismatches.
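Handling an eCBSV-style result can be sketched as below. The real SSA service returns a yes/no verification indicator (and a death indicator), not PII; the field names and routing labels here are simplified placeholders, not the actual API.

```python
# Simplified handler for an eCBSV-style yes/no verification result.
# Routing labels are illustrative, not a recommended decision policy.

def score_ssn_check(verified: bool, death_indicator: bool = False) -> str:
    if death_indicator:
        return "decline:deceased_ssn"       # SSN belongs to a deceased person
    if not verified:
        # Per the text above, mismatches are a strong synthetic-identity signal
        return "review:possible_synthetic"
    return "pass"
```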
 
You may also want to do a driver’s license or passport document verification. This requires a camera on the PC or mobile phone, which is nearly always available. Some of the vendors are quite effective at distinguishing good documents from photos of the original document (stolen from customers’ devices) or counterfeit documents (easily and cheaply available). And today, the vendor also has to detect GenAI-created documents. Along with the document, you need to do a selfie and liveness test. But remember, a fraudster may hire a homeless person near a branch and create a driver’s license with that person’s photo and the stolen PII. (As a side note, every branch should also have a document reader to help detect fraudulent documents, and all the controls mentioned here should be applied to branch applications as well. Walk-in applications can be just as fraudulent.)

During the panel, I received a question from a regional bank asking if checking a driver’s license creates too much friction. My answer has two parts. First, in the old days, the customer gave up 1–2 hours to come to the branch and fill out an application, so a document check is modest friction by comparison. Second, if you are worried about too much friction, treat the driver’s license check as a step-up authentication, used only when the other controls show uncertainty about the applicant.
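The step-up idea can be sketched as a simple risk-band router: high-confidence applications skip the friction entirely, and only the uncertain middle band triggers the document check. The band boundaries below are assumptions for illustration, not recommendations.

```python
# Minimal sketch of document verification as step-up authentication.

LOW, HIGH = 30, 70  # illustrative risk-score band boundaries

def next_action(risk_score: int) -> str:
    if risk_score < LOW:
        return "approve"                  # confident enough: no extra friction
    if risk_score >= HIGH:
        return "decline"                  # confident the other way
    return "step_up:document_check"       # uncertain: the friction is worth it
```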

There are a few other controls to consider. First, a very important control is to analyze how the person is entering the data (behavioral biometrics). It is common to see fraudsters use shortcuts or copy and paste when entering data. You will also see segmented typing when entering certain data such as the SSN (something a real person knows from memory). Another behavior to look at is navigation patterns – how familiar the person is with the application process while entering the data. These are just a few examples of the many behavioral tests that can be applied.
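Two of these behavioral signals can be sketched as toy checks. A genuine applicant types a memorized SSN fluidly, while someone reading it from a stolen record often pauses between chunks; a field that arrives in a single input event suggests a paste. The 1500 ms pause threshold and the two-pause rule are assumptions for illustration, not production tuning.

```python
from typing import List

# Illustrative behavioral-biometrics checks; real products model far more signals.

def has_segmented_typing(key_times_ms: List[int], pause_ms: int = 1500) -> bool:
    """Keystroke timestamps (ms); flags pauses between memorized-data chunks."""
    gaps = [b - a for a, b in zip(key_times_ms, key_times_ms[1:])]
    long_pauses = sum(1 for g in gaps if g > pause_ms)
    return long_pauses >= 2            # e.g., a pause after each 3-digit chunk

def was_pasted(field_events: List[str]) -> bool:
    # A multi-character value arriving in a single input event suggests paste.
    return len(field_events) == 1 and len(field_events[0]) > 1
```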

A second control is to see if the data is available on the dark web. Is the data available from info-stealer malware, which captures almost everything from a victim’s device (passwords, websites visited, credit cards used online, etc.)?  Is the data part of a recent data breach?  A match with this data is more of a warning than an absolute turndown.
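One way to honor "warning, not turndown" is to let a dark-web match raise a risk score rather than drive a decision directly. The match categories and weights below are invented for illustration; in practice a breach-monitoring or consortium vendor supplies the matches.

```python
from typing import List

# Sketch: dark-web exposure contributes to risk instead of auto-declining.

def dark_web_adjustment(matches: List[str]) -> int:
    """Return a risk-score increment for dark-web exposure of the PII."""
    weight = {"infostealer_log": 25, "recent_breach": 15, "old_breach": 5}
    return sum(weight.get(m, 10) for m in matches)  # unknown categories get 10
```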

A third control is doing link analysis of the network data (IP address, device fingerprint, etc.) against any previous fraud in your organization or in conjunction with one of your vendors offering consortium data. 
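A toy version of that link analysis is below: index the network attributes of confirmed fraud cases, then look up new applications against them. A real system would query a graph store or a consortium service; this in-memory index just illustrates the linkage idea.

```python
from collections import defaultdict

# Toy link analysis: connect an application to prior confirmed fraud through
# shared network attributes (IP address, device fingerprint, etc.).

class FraudLinkIndex:
    def __init__(self):
        self._seen = defaultdict(set)              # (attr, value) -> case ids

    def record_fraud(self, case_id: str, attrs: dict) -> None:
        for key, value in attrs.items():
            self._seen[(key, value)].add(case_id)

    def linked_cases(self, attrs: dict) -> set:
        """Return all known fraud cases sharing any attribute with `attrs`."""
        linked = set()
        for key, value in attrs.items():
            linked |= self._seen.get((key, value), set())
        return linked
```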

With the controls above, you will catch most fraudulent applications. But not all. And as for synthetic IDs, we will save that for another discussion.

Once a new account application is accepted, there is still more assessment to do. For example, watch out if the contact information is quickly changed. New accounts should also be monitored for ongoing suspicious activity that could indicate the account is being used as a mule account (e.g., multiple logins followed by a large deposit and a rapid exfiltration of the money). You can also use behavioral biometrics to look for changes in user behavior relative to the historical profile for the user. Many of these fraudulent new accounts are set up to become money mules.
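The mule pattern described above can be sketched as a windowed check: a large deposit followed by most of the money leaving quickly. The $5,000 threshold, 48-hour window, and 80% outflow ratio are assumptions for illustration only.

```python
from datetime import datetime, timedelta
from typing import List, Tuple

Event = Tuple[datetime, str, float]   # (timestamp, "deposit"/"withdrawal", amount)

def looks_like_mule(events: List[Event], window_hours: int = 48,
                    large: float = 5000.0) -> bool:
    """Flag a large deposit whose funds are rapidly exfiltrated (events sorted by time)."""
    for i, (t, kind, amount) in enumerate(events):
        if kind == "deposit" and amount >= large:
            outflow = sum(a for t2, k2, a in events[i + 1:]
                          if k2 == "withdrawal"
                          and t2 - t <= timedelta(hours=window_hours))
            if outflow >= 0.8 * amount:   # most of the money leaves quickly
                return True
    return False
```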

Conclusion

I hope this blog helps you understand how you can control the possibility of receiving stolen PII in an application using a Zero Trust approach. Best practice requires a risk assessment at both account origination and post-account opening. Financial institutions should deploy selective layers of security when the new account application is submitted and monitor account activity after the account is opened. And remember: even though you have a number of controls available, you may not need to deploy all of them for every application.
