Episode 4: Falling Victim to Scams

Posted by:

BioCatch

Episode Description

The fourth episode of Digital Tells: A BioCatch Podcast focuses on scams and social engineering. Why is there so much scam activity these days? Why are these scams so successful? And what, if anything, can financial institutions do to help protect themselves and their customers?

We open with a first-hand story of a brilliant social engineering scam, told by Coby Montoya. Tim Dalgleish discusses some of the Digital Tells that may indicate scam activity. And Ayelet Biger-Levin explains the layers of machine learning and analytics that converge to detect scams.

Transcript

Coby Montoya 

Sometime in the fall of 2019, I was working from home. Middle of the workday, I received this text message that says it's from Capital One. It says, Hey, there's been a charge on your card at this Wal-Mart in California. Is that you? Yes or no? I responded no. And I immediately took out my Capital One card, looked at the back, and called the phone number on the back to report this, you know, hey, this definitely was not me.

Peter Beardmore 

The voice you were just hearing is from a gentleman I met recently. He's a professional, in his late 30s, lives in Arizona, and the scenario he's discussing is not uncommon. It's happened to me. It's likely happened to you… but this story gets interesting….

Coby Montoya

So as I'm on hold, I receive additional texts. Says, OK, this wasn't you. Type one if you would like someone to call you instead of having to sit on hold. So again, middle of my workday, you know, this is much easier for me than sitting on hold. So I said, Sure, have someone call me back. A few minutes later, actually, probably less than a minute, I receive a call back, and the phone number was, you know, the same phone number as on my Capital One card. And so I let them know, Hey, this was not me. You know, I identified myself. I authenticated myself, providing some basic information, and they say, OK, well, we can send you a replacement card. We'll deactivate this one. It's going to take about three to five business days to receive this card. It's actually a card I use very frequently. So I ask them, Hey, is there any way you could send this out sooner? They said, Well, we can. There's a fee that comes with that, but, you know, you just experienced fraud, so we're going to go ahead and waive that fee for you, right? All right. Great. Good experience. And I appreciate it. And so they said, before we send this replacement card out, however, we need you to verify the address we're going to send it to. We're also going to send you a one-time passcode just to ensure that it's really you we're speaking with. And so I said, OK, you know, waited for the code. Thirty seconds later, I receive a one-time passcode to my phone number. I read it back to them. They said, Great, thanks. We're going to go ahead and send this card out to you. And that was that. So I thought, All right. Minor annoyance. You know, no one likes fraud on their card, but it took me about 10 minutes, three songs. So I thought, All right, I'm good to go. Later that evening, at home watching TV with my girlfriend, all of a sudden I receive a notification from my Capital One mobile app that says there's been a charge at a Wal-Mart in California. So I'm based in Arizona, right?
So this is definitely not me. It's kind of annoying. I was like, Hey, we just talked about this. I just resolved this with Capital One. Why are they approving charges on a card that has been reported as compromised? So I called Capital One, ready to, you know, just explain to them, Hey, guys, you shouldn't be approving these charges. I reported this. And as I speak to someone, they say, we don't show any sort of interaction; we haven't contacted you at all today. I go, Are you sure about that? Yeah. Let me check a different system. They check a different system. No, no interactions. And so I'm a little skeptical. I just talked to someone hours ago. This, you know, this can't be the case. I know I spoke to you guys. So I ask, OK, I know you're doing the best you can do, but can I please talk to a supervisor? Maybe they have access to a different screen that you don't have access to, you know, respectfully. And so I'm, you know, on hold for another 15, 20 minutes, and now I speak to a supervisor. Same situation. So I go, OK, is there someone in, like, a security, risk, fraud department? OK, yep, I'm transferred there. Same thing. So I learned that they actually didn't contact me. So I'm really puzzled by this, and I realized, sort of in real time, that actually a fraudster, you know, a bad actor or criminal, actually contacted me to essentially social engineer me.

Peter Beardmore

So, obviously, this was a pretty good scam. And, I mean, stuff like this happens every day, right? But here's the really interesting part… That voice you were just hearing is Coby Montoya, and, well, let me let him introduce himself…

Coby Montoya

My name is Coby Montoya. I work in fraud and security, and I've been doing so for about 15 years now. I've worked on the merchant side, the card issuer side, the payment network side, and the fraud vendor side. So I have a fairly broad lens when it comes to fraud risk management.

Peter Beardmore

Coby’s actually being modest. He’s a fraud expert, who’s worked for some of the biggest financial services companies in the world, helping them to manage risk and fight fraud. 

So the next time it happens to you – or when your elderly mother tells you she got scammed again – go easy on her. It happens to even the best. In fact, social engineering scams are on the rise globally. According to the U.S. Federal Trade Commission, imposter scams were the #1 type of fraud reported by consumers last year. And most of these scams were carried out over the phone – with reported losses of around $30 Billion.

A few weeks ago I registered some domain names in preparation for launching this podcast, and for some reason I didn't get the privacy settings right. About a week later my phone started ringing off the hook – at least a dozen calls every day – about half from would-be web developers looking for work; the other half, dire warnings about fraudulent payments made from my accounts, messages of accounts past due, a lottery win, and a few of what would apparently be life-changing opportunities.

Why is this all happening? In some cases it’s pandemic related. But mostly, it’s just because it works. And as evidenced by Coby’s story – scammers are convincing and very clever. And they prey on human nature and emotions.

And for financial institutions this is a major problem. Because in some cases they’re liable to refund losses to consumers – in other cases they’re not, but might still make refunds anyway… just to protect their brand and their relationship with the customer – and in some cases the guidance from regulators is changing.

In this episode of Digital Tells we're focusing on scams and social engineering. Why are they so successful? And what, if anything, can financial institutions do to help protect themselves and their customers?

In previous episodes you met Tim Dalgleish from BioCatch – he's been working with financial institutions throughout his career, and has a deep understanding of what's crucial for banks when dealing with reported scams.

Tim Dalgleish

So, yeah, there's a whole remediation process, with a financial impact, an operational impact, and a customer experience impact, because it's the customer life cycle. When you're a victim of fraud, it can go one of two ways. As a bank, if you do a great job in looking after that victim and getting them back to normal, or protecting them, then that's the story they tell at every barbecue for the next six months, and they tell their friends. If you do a bad job of it, that's a bad story that they're going to tell to their friends, and they might even change banks. So it's a really critical point in the life cycle of the relationship with a customer. If you get it right or wrong, it can typically go one of two ways. So it's really a critical, critical point in the relationship.

Peter Beardmore

So I want to stipulate for a moment here, particularly with respect to scams and social engineering, that there are hundreds, if not thousands, of angles that scammers can take. And they're constantly iterating and improving. Some are better than others – you just heard one – we could share dozens of others, but this is supposed to be a twenty-minute podcast. What I'm getting at here is that there's a myriad of cash-out schemes from scam to scam. In some cases the scammer may literally lead their mark to transfer money, maybe even coach them how to do it – without ever getting a credential or gaining direct access to an account. In other cases, those credentials or account numbers are exactly what the scammers are seeking – and that leads to the account takeover fraud and account origination fraud issues we discussed in previous episodes. And in still other cases… the scammer convinces the victim to give them control over their account, in the middle of the session, after they've already legitimately logged in. Maybe they get them to download a remote access tool or malware – giving control of their phone or computer to the scammer – essentially giving them free rein – but following a perfectly legitimate log-in, from a known device, and probably from a known IP address.

With all these potential combinations of interactions and outcomes, it might be hard to believe that there's some magical algorithm to throw the brakes on any of it. And you'd be right. But let's think a little more deeply about this. About what the victim actually experiences.

Let's say you're on the receiving end of a scam. Maybe someone's contacted you; they say they're from your bank; it's about some suspicious payment activity – and they get you to log into your account – and then they ask you to do something you just can't figure out how to do. You get frustrated, you're worried that your money's been stolen, and to be helpful they suggest you download a tool that will help them help you resolve it.

OK, so obviously… that’s not a good outcome…and maybe the scam doesn’t even go down that path…  but I want you to think about how you’d behave in that moment. I mean you’re experiencing something that’s out of the ordinary… what would you be doing on the screen – with your mouse – or your keyboard – or how would you be handling your phone? What might be some of the Digital Tells that could indicate that scam activity was underway?

Here’s Tim Dalgleish again.

Tim Dalgleish

So if you think about it, you know, when I normally do my banking as a person, I know why I'm logging in. I'm logging in to check a statement, pay my electricity bill, transfer money to my friends – behaving with intent. I know what I'm doing. What we see with scams is that the victim is being coached. And that manifests itself in what I would call little breadcrumbs of behaviour. If I'm on the phone to a scammer, they say log in, and I log in, and then I'm waiting there on the page while they're still trying to social engineer me and give me instructions for the next thing. Now, from a behavioural perspective, I'm no longer behaving with intent. I log in, and then my mouse is lost while the scammer's on the phone to me, instructing me what to do or convincing me to do the next thing. So, you know, when you're being coached through a banking session, it looks much different from a behavioural perspective than when you're doing it yourself with intent. So that's really the power of being able to understand the customer's behaviour in really granular detail.
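To make Tim's "breadcrumbs of behaviour" concrete, here is a minimal, hypothetical sketch of how hesitation could be quantified from raw session event timestamps. The feature names, thresholds, and example sessions are all illustrative assumptions for this sketch – they are not BioCatch's actual features or model.

```python
# Hypothetical sketch: a coached session tends to show long idle gaps
# (waiting for the scammer's next instruction) between short bursts of
# activity. All names and thresholds here are illustrative only.

def session_features(event_times, idle_threshold=10.0):
    """Compute simple hesitation features from a list of event timestamps (seconds)."""
    gaps = [b - a for a, b in zip(event_times, event_times[1:])]
    if not gaps:
        return {"max_pause": 0.0, "idle_ratio": 0.0, "bursts": 1}
    idle = [g for g in gaps if g >= idle_threshold]
    return {
        "max_pause": max(gaps),               # longest wait-for-instructions gap
        "idle_ratio": sum(idle) / sum(gaps),  # fraction of session spent idle
        "bursts": len(idle) + 1,              # activity segments between long pauses
    }

# A self-driven session: steady interactions a few seconds apart.
normal = session_features([0, 2, 5, 7, 10, 13])
# A coached session: log in, then long silences while the scammer talks.
coached = session_features([0, 2, 45, 47, 110, 112])

print(normal)   # no long pauses, a single burst of intent-driven activity
print(coached)  # most of the session spent idle, split into several bursts
```

No single number here proves anything on its own; the point is simply that coaching leaves measurable traces in timing data.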

Peter Beardmore

So there are these subtle indicators… but you may be thinking, as I have: "so what if I pause a little, or doodle with my mouse? I do that all the time. How's anybody going to put that all together and conclude there's a scam in progress?" If you listened to episode 1, you may recall the science fiction conversation we had with Howard Edelstein, and his point about finding the data – using machine learning to identify the subtle patterns – connecting the dots (so to speak) – and artificial intelligence to connect those patterns in real time. No single indicator is particularly strong in and of itself, but it's the collection of indicators, combined with data and analysis from millions of other banking sessions, that can lead to reliable conclusions.

We met Ayelet Biger-Levin in previous episodes. She and I were discussing that phenomenon of bringing science fiction to reality. I asked her to talk more about BioCatch AI and machine learning, and how behavioral biometrics actually come together.

Ayelet Biger-Levin

BioCatch leverages supervised machine learning, and we've been asked a lot about, kind of, the difference between that and deep learning. So deep learning is really throwing a bunch of data at the machine and saying, OK, group it into groups so we can find the differences and anomalies moving forward. So, for example, if you take flowers and you throw at the machine all these different characteristics about flowers – the shape, the size, the color, the smell – then it will be able to group them into flower groups and families. And then when you get a new one, you can say, OK, this belongs to that, or it's abnormal to this group, etc. But when it comes to fraud, you don't just want to throw data at the machine and group it; you want to find those specific characteristics that will correlate to fraud or genuine. And what we do is we take data, for example, the user interaction data, and we say, OK, what in this data can help us correlate with fraud? In order to do that, we need to look at known fraud cases and known genuine cases, and then attribute those cases to the data and say, OK, here's what we've learned from all the fraud data. Here are the patterns that correlate with the fraud data and are different from the genuine data. That's how we can say that if we see signs of low data familiarity, that has a high correlation to fraud, because we've seen that in 64 percent of all the fraudulent cases, but we see it in none of the genuine cases. So that kind of helps correlate the known incidents for every customer. So we have a general model with all the learnings that we have over the years, and that's why having 10 years of data is very, very powerful, because we've seen how the mode of operation evolves over the years, and every customer has unique MOs as well. So the ability to tune things according to their confirmed fraud cases and their confirmed genuine cases helps us learn over time.
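The supervised approach Ayelet describes can be sketched in a few lines: tally how often each behavioral indicator appears in confirmed-fraud versus confirmed-genuine sessions, and those rates become the learned signal. The indicator names, sessions, and labels below are invented for illustration; this is a toy sketch of the general idea, not BioCatch's model.

```python
# Illustrative sketch of supervised learning from labeled cases: learn which
# behavioral indicators correlate with confirmed fraud vs. confirmed genuine
# sessions. All indicator names and data here are made up for the example.

from collections import Counter

def indicator_rates(sessions, labels):
    """For each indicator, return (fraction of fraud sessions, fraction of genuine sessions) it appears in."""
    fraud, genuine = Counter(), Counter()
    n_fraud = labels.count("fraud")
    n_genuine = labels.count("genuine")
    for indicators, label in zip(sessions, labels):
        (fraud if label == "fraud" else genuine).update(indicators)
    all_indicators = set(fraud) | set(genuine)
    return {i: (fraud[i] / n_fraud, genuine[i] / n_genuine) for i in all_indicators}

# Labeled history: which indicators fired in each confirmed case.
history = [
    {"low_data_familiarity", "long_pauses"},   # confirmed fraud
    {"low_data_familiarity"},                  # confirmed fraud
    {"long_pauses"},                           # confirmed genuine
    {"fast_typing"},                           # confirmed genuine
]
labels = ["fraud", "fraud", "genuine", "genuine"]

rates = indicator_rates(history, labels)
# "low_data_familiarity" appears in all fraud cases and no genuine ones,
# so it becomes a strong learned signal - the same shape of reasoning as
# the 64-percent-of-fraud, none-of-genuine example in the transcript.
print(rates["low_data_familiarity"])
```

In practice a real model would weigh many such indicators together rather than reading them off one at a time, but the labeled-cases-first logic is the same.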

Peter Beardmore

Now, the modeling approach that Ayelet describes applies both to fraud in the strict sense – where a cybercriminal is actually impersonating the victim – and to scams – where a social engineer is manipulating or tricking a victim into doing something harmful. But from a modeling standpoint, it's effectively the same approach. And that's good, because as we alluded to earlier, a lot of times scam and fraud activity overlap. What starts as a scam can quickly switch to fraud.

So let's quickly review that BioCatch process, as we discussed briefly in episode 2. The first layer in the model looks at the user's own behavior profile. How has that user acted in the past, compared to now? The second layer compares the user's behavior to that of the broader population. How does mouse doodling correlate to fraud? How do long sessions and pauses correlate to genuine sessions or known fraud cases? And the third layer – and this is like the holy grail of scam detection – is putting those first two layers together. So when you have a scenario where you've got a legitimate user, but that legitimate user's behaviors indicate scam or coercion activity, the bank can actually function as a last line of defense – and help protect us all, but especially vulnerable folks, from scams.
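The three-layer combination above can be sketched as a simple decision: layer 1 asks "is this the genuine user?", layer 2 asks "does the behavior look coached?", and layer 3 is the interesting intersection – right user, wrong behavior. The scores, thresholds, and labels below are illustrative assumptions, not BioCatch's actual model.

```python
# Hypothetical sketch of combining the three layers described above.
# The scam case worth catching is a session that looks like the genuine
# user (layer 1) but shows population-level coaching patterns (layer 2).
# Thresholds and names here are illustrative only.

def classify_session(user_match, coaching_score,
                     match_threshold=0.8, coaching_threshold=0.7):
    """
    user_match:     layer 1 - similarity to this user's own behavioral profile (0..1)
    coaching_score: layer 2 - population-level correlation with coached/fraud sessions (0..1)
    """
    is_genuine_user = user_match >= match_threshold
    looks_coached = coaching_score >= coaching_threshold
    if not is_genuine_user:
        return "possible account takeover"   # someone else at the keyboard
    if looks_coached:
        return "possible scam / coercion"    # layer 3: right user, wrong behavior
    return "genuine"

print(classify_session(0.95, 0.1))   # normal self-driven banking session
print(classify_session(0.3, 0.9))    # impostor-like behavior
print(classify_session(0.92, 0.85))  # genuine user apparently being coached
```

The design point is the third branch: without layer 1, the coached session would be waved through because the login and device all check out; without layer 2, it would be waved through because it really is the genuine user.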

One other important point here is that this seemingly hair-splitting difference between scams and fraud activity actually does matter. Oftentimes, an investigation's determination on scams or fraud may mean the difference between whether the bank is required to reimburse the customer or not. Historically (and this may be an over-generalization), fraud transactions are typically reimbursed. Scam activity – where the user is tricked into transferring funds on their own, or into knowingly sharing credentials and passcodes – not so much. But those tides are also shifting. In the UK, the Financial Ombudsman Service is finding in favor of fraud and scam consumer complainants more frequently than in previous years. In the U.S., the Consumer Financial Protection Bureau recently amended its guidance on those scams where the fraudster tricks the victim into sharing their OTP code – this is exactly what happened in the scam Coby Montoya fell victim to. Victims are now covered in that scenario in the U.S.

So this is another potential loss channel for financial institutions, which only increases the incentive to better detect scam activity. 

Digital Tells is written and narrated by me, Peter Beardmore, in partnership with my producer Doug Stevens of Creative Audio and Music, and with the unwavering support and sponsorship of my employer, BioCatch.

Special thanks to Coby Montoya – Coby wrote a blog post about the experience he shared at the beginning of this episode. You can find a link in our show notes. Also, thanks again to Tim Dalgleish and Ayelet Biger-Levin.

For more information about this episode, behavioral biometrics, or to share a comment or idea, visit biocatch.com/podcast.

Join us for episode 5, in which we’ll explore Mules. Are you a mule? Truth is, you may not even know. More importantly, why should financial institutions care about mule activity? And what can be done to detect it?

Until then, take care.
