UK Finance, the trade association for UK banks and financial institutions, released its annual fraud figures for 2024 at the end of May. This prompted me to reflect on how much the fraud landscape has changed since I was first involved some 25 years ago. I was reminded that back in the year 2000, we were obsessing over card fraud – indeed, card and cheque fraud was all we compiled at industry level. Card fraud then ‘peaked’ at £317m.
We spent the first half of the new millennium focussed on rolling out chip and PIN to counter the then dominant problem of counterfeit magnetic stripes and card ‘cloning’. One key issue that dominated the debate was that we could not, and should not, expect retail staff or customers to be able to distinguish the fakes from the real thing. Transaction monitoring was in place, but we had been asking retail staff to visually check the card’s security features at POS and asking customers to check their statements for unusual activity.
Fraud fast forwards 25 years
Fast forward 25 years and contrast the relative simplicity of the fraud landscape then with the complexities of the current one. In 2024, card fraud alone was £572.6m out of a total of £1.17bn, with £450.7m lost to authorised push payment fraud and the rest a mix of unauthorised remote banking and cheque fraud.
In that 25 years, we have seen an exponential growth in technological development which has fundamentally changed the way we live.
In 2022, Max Roser, Founder of Our World in Data, wrote a number of excellent articles about this phenomenon.
“Technological change was extremely slow in the past – the technologies that our ancestors got used to in their childhood were still central to their lives in their old age. In stark contrast to those days, we live in a time of extraordinarily fast technological change… For recent generations, it was common for technologies that were unimaginable in their youth to become common later in life.”
The development of AI can only serve to accelerate this exponential growth to the point that it will fundamentally change the world in the same way as the agricultural and industrial revolutions did.
Certainly when growing up, I did not imagine a world connected by smartphones and the Internet.
Technology has changed the ways we pay, making payments faster and putting the customer in control. I can recall that early ATMs used a one-time card which allowed you to withdraw a set sum of money. The ATM retained the card, and you got it back in the post a few days later. That system sounds so ludicrous now that I almost think I dreamt it – science fiction in reverse, if you will.
Now, I can set up and make payments through my banking app in seconds. Naturally, technology has also changed the ways in which fraud is committed, how we fight it, and the number of parties involved in the ecosystem. Security and detection systems were improved to the point that fraudsters started to target the weakest link in the system – the human. Enter the world of authorised push payment fraud.
When people are the problem – and the solution
I recently attended two conferences, Biocatch Connect and the UK Finance Key Conversation, and the themes emerging from the speakers have caused me to reflect on how we humans can be both part of the problem and part of the solution in fraud. Here are some examples of human ‘problems’:
- Social engineering continues to form a big part of the way in which fraud is committed. We post all sorts of information on social media, and criminals combine this with information gleaned from the Dark Web to target the most lucrative victims. This is particularly playing out in investment frauds. More ‘bespoke’ attacks mean it becomes more difficult to offer general education about how to avoid being defrauded, and more difficult to detect when a fraud is occurring.
- Human issues are not always about the end customer: organisations have their own silos and are bound by human decisions on regulation, risk and structure. One speaker talked about the move in many banks to merge cyber and fraud departments in the interests of shifting the focus left up the ‘kill chain’ to detect and prevent fraud earlier. The main challenges experienced in doing so were not technological, but ones of implementing cultural change and getting each group of people to understand each other’s ‘language’.
- We used to split money mules simply into complicit and non-complicit – they became money mules either in full knowledge of what they were doing, or in ignorance. However, it’s now clear that many mules are being exploited by others – with children as young as 10 being used to open accounts. This has implications for the reliance on digital ID when the owner of that ID is being exploited or influenced by a third party.
We must certainly continue to fight fire with fire in the race to keep up with technological change and its impact on fraud. But to think that will be the only solution is a mistake. Human intervention remains key in a number of areas. Here are three examples:
- Human intervention is needed when you want to ‘break the spell’ of someone who has been socially engineered. Criminals use all the manipulative skills at their disposal to groom their targets in romance and investment scams.
- Human intervention is needed when customers want to report fraud. When you have a problem, you need to be able to speak quickly and easily to someone who has empathy and an understanding of the situation. Those institutions that rely only on remote communication have found themselves in the media spotlight in recent months, and not in a good way.
- Human empathy is needed to support and help victims of fraud, to recognise and respond when there are signs of exploitation and vulnerability.
In the complex world of false profiles, ads and IDs, distinguishing the real from the fake becomes increasingly important, yet increasingly impossible for the human eye or brain. Technology that can determine in real time which signals indicate genuine behaviour versus fraudulent activity is therefore crucial. However, human interaction remains critical when it comes to communicating with, and supporting, victims of fraud.