Online platforms in scope of the UK’s Online Safety Act had until 16 March this year to carry out an illegal harms risk assessment – to understand how likely it is that users could encounter illegal content on their service, or, in the case of ‘user-to-user’ services, how they could be used to commit or facilitate certain criminal offences.

As of 17 March 2025, those platforms must start implementing measures to remove illegal content quickly when they become aware of it, and to reduce the risk of it appearing in the first place.

There has been growing concern in the UK over the last few years about the role that online platforms and telecoms play in giving criminals the opportunity to socially engineer customers, an activity which typically ends in extorting money from the victim in some shape or form. In the worst cases, this takes the form of sextortion: manipulating young people by establishing what appears to be a promising romantic connection through social media or dating apps, then persuading the target to share private images or videos. The criminal then threatens to distribute the material to family, friends, or the public unless a ransom is paid. Tragically, this has led to a number of suicides.

Criminals invest significant time and effort: creating fake profiles, grooming victims, exploiting emotional vulnerabilities and building trust. Sound familiar? Of course it does, because these tactics underpin just about every type of fraud we see on a daily basis.

When authorised push payment (APP) fraud first became a ‘thing’ here in the UK, the focus was on the banking industry: the narrative was all about how the banks should do more to protect victims and to stop mule accounts from being opened. Ultimately, this resulted in the mandatory reimbursement scheme introduced last October, which gives the banks receiving the funds in particular an incentive to be more proactive in identifying mule accounts, as they are now on the hook for 50% of the costs.

However, through the work of UK Finance and its members, evidence was gathered showing that 72% of the scams that result in push payments start online, with a further 16% starting through telecoms networks. (Interestingly, online-originated scams tend to be lower value, accounting for 32% of losses, while those originating through telecoms include higher-value cases such as investment frauds and account for 35% of all UK APP losses.) And so questions started being asked about what these organisations should be doing to help stem the tsunami of fraud hitting UK citizens. There is of course zero incentive for them to act: they don’t lose any money, and arguably in some cases they benefit from paid advertising.

Other jurisdictions have been more inclusive, and Australia is the poster child: its government introduced a Scams Prevention Framework (SPF), which establishes a comprehensive regulatory structure for banks, telecommunications providers, and digital platform services aimed at preventing financial fraud. In summer 2024, the Australian Financial Crimes Exchange (AFCX) announced the completion of its near real-time Intel Loop, which means banks, telecommunications networks, internet service providers, and social media companies can share information on scams between themselves more easily.

Meanwhile in the UK, practical progress in collaborative activity has so far been built primarily on a number of small-scale projects (coalitions of the willing, if you will) that set up proofs of concept, establish successes and then scale up to include more participants. It cannot be overstated how long it takes to establish trust and understanding between different sectors. Typically, all parties start off with a view about what the other parties should or could be doing to fix the problem, usually simply because neither truly understands the other’s business model, processes and barriers. At this early stage, it’s easy to get lost in the politics, with both sides fielding policy people rather than real subject matter experts. But if you can get SMEs from both sides to the table willing to keep an open mind, good things start to happen.

One such example is Scam Signal, a recently introduced tool developed by a consortium of the UK’s banks and mobile network operators. The tool exposes an API that enhances the banks’ ability to detect and prevent APP fraud by analysing real-time network data and identifying correlations between phone calls and fraudulent bank transfers.

This has been almost five years in the making.
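To make the idea concrete, here is a minimal sketch of the kind of correlation such a service can surface. Everything below is hypothetical (the actual Scam Signal API is not public, and the names, fields and thresholds are invented for illustration), but it captures the underlying signal: a customer who is on a live phone call at the moment they authorise a payment to a new payee is far more likely to be being coached by a scammer.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical sketch only: the real Scam Signal API is not public.
# This illustrates the core idea of correlating live telephony signals
# (from the mobile network) with a payment in flight (at the bank).

@dataclass
class CallSignal:
    started_at: datetime
    ended_at: datetime | None          # None means the call is still in progress
    is_known_fraud_number: bool        # e.g. the number appears on a shared watchlist

@dataclass
class PaymentAttempt:
    initiated_at: datetime
    amount_gbp: float
    is_new_payee: bool

def scam_risk_score(payment: PaymentAttempt, recent_calls: list[CallSignal]) -> float:
    """Return a 0-1 risk score for an authorised push payment.

    Illustrative heuristics, not any operator's actual model:
    - a call in progress while the payment is made is the strongest signal,
      since impersonation scammers typically coach victims live on the phone;
    - a call that ended only minutes before the payment is a weaker signal;
    - a known-fraud number or a brand-new payee amplifies either signal.
    """
    score = 0.0
    for call in recent_calls:
        in_progress = call.ended_at is None or call.ended_at >= payment.initiated_at
        if call.started_at <= payment.initiated_at and in_progress:
            score = max(score, 0.6)   # customer is on the phone right now
        elif call.ended_at and timedelta(0) <= payment.initiated_at - call.ended_at < timedelta(minutes=10):
            score = max(score, 0.3)   # call ended shortly before the payment
        if call.is_known_fraud_number:
            score = min(1.0, score + 0.3)
    if payment.is_new_payee:
        score = min(1.0, score * 1.5)
    return score
```

In production, of course, the network signal arrives from the mobile operator in real time and is combined with the bank’s own behavioural models. The point is simply that neither party can compute this on its own, which is precisely why the cross-sector plumbing took so long to build.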

There is evidence that some individual online companies are stepping up. Google recently announced new scam detection features for calls and text messages, and last October Meta launched its Fraud Intelligence Reciprocal Exchange (FIRE), a pilot that allows banks to share intelligence directly with the company to combat scams on its platform.

All well and good, but fraud is constantly evolving, so collaborative work in the UK needs to be more fleet of foot, and that requires real commitment from the tech companies in particular, who are probably about five years behind the telcos in terms of willingness to come to the table. In their defence, they do not offer homogeneous services in the way that telcos do, so solutions need to be more bespoke to each platform. There is also a lot of focus on bigger issues such as child sexual abuse material, which rightly tends to trump fraud when it comes to priority.

With the banks currently footing the bill for reimbursing customers who have lost money to APP fraud, there have been calls for tech companies to cough up for some of the reimbursement costs, as proposed in the Australian Scams Prevention Framework. The logic is sound, but I would prefer to see them step firmly into the collaborative space so that scams can be stemmed before they start, rather than focusing on reimbursement after the event.

Will the Online Safety Act provide enough of an incentive for them to be more proactive? Time will tell.
