During the last few weeks of 2024, we saw several steps indicating that social media companies are joining – or being drafted into joining – the war on fraud and child exploitation. In November, Meta announced that it had taken its first major steps toward addressing “Pig Butchering” scams, including taking down more than two million accounts connected to criminal networks involved in the schemes. In early December, it was reported that Telegram had agreed to work with an internationally recognized body to stop the spread of child exploitation material. And on December 16, 2024, Ofcom, the UK regulator for communications services, published its first codes of practice and guidance for Protecting People from Online Harms. Banks and other financial institutions welcome the participation of social media companies in these counter-fraud efforts.
For the past several years, the focus of efforts to combat fraud has been on financial institutions because they are the point where funds are transferred from the victim to the fraudsters – generally through an intermediate account known as a “mule” account. In some cases, funds are sent directly to fraudsters. In other cases, victims are asked to deliver cash or gold bars to the fraudsters, usually through intermediaries. In yet other cases, victims are asked to withdraw cash from their bank accounts and convert it to cryptocurrency, which they send to the fraudsters via crypto ATMs.
When the victim – or members of the victim’s family – discover that the victim has been defrauded, their first contact is usually with the bank. This is for two reasons. First, the bank is holding the victim’s funds and has allowed the funds to be transferred or large amounts of cash to be withdrawn. Second, banks have deep pockets, and people look to banks to make victims whole when a customer is defrauded.
Banks can always do more to protect their customers. In cases involving hacking, account takeovers, and even check fraud, banks are responsible for protecting the funds and accounts of their customers. But in cases involving “scams,” where customers are enticed into voluntarily sending money to fraudsters, the banks’ mission becomes one of protecting their customers from themselves. This is a very complex area because customers are entitled to do with their money as they see fit. It is difficult, and may even be legally impermissible, for banks to refuse to execute a transaction that the customer is requesting. In some cases, banks end up closing a victim’s account in order to prevent a transaction that the bank believes is a scam – at which point the customer can simply take their funds to another bank and try to complete the transaction.
The point here is not to say that banks have no role in trying to protect customers from being defrauded in scam situations. Banks have been doing a lot to address this problem. They are often able to detect scam transactions, such as when an elderly person who has never sent an international wire transfer requests a $100,000 transfer to Asia. In some cases, banks will stop such transactions and question the customer about them. In other cases, “trusted contact” protocols have been set up so that the bank can reach out to a designated non-account holder to discuss the situation. These contacts are burdensome and time-consuming, yet some banks are taking such steps to protect their customers from losing money – even though they are generally not liable to their customers for these kinds of “authorized” transactions.
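To make the idea concrete, here is a minimal sketch of what such a rule-based screen might look like. It is purely illustrative – the thresholds, the country list, and the Customer and WireRequest types are hypothetical, not any bank’s actual system:

```python
from dataclasses import dataclass

# Hypothetical record types -- real core-banking systems expose far richer data.
@dataclass
class Customer:
    age: int
    prior_international_wires: int  # lifetime count of international wires sent

@dataclass
class WireRequest:
    amount_usd: float
    destination_country: str  # ISO country code

# Illustrative thresholds only; a real bank would tune these against historical fraud data.
ELDER_AGE = 65
LARGE_AMOUNT_USD = 10_000
HIGH_RISK_COUNTRIES = {"MM", "LA", "KH"}  # e.g., jurisdictions named in Meta's takedowns

def scam_risk_flags(customer: Customer, wire: WireRequest) -> list[str]:
    """Return human-readable reasons this wire should be held for review."""
    flags = []
    if customer.prior_international_wires == 0 and wire.amount_usd >= LARGE_AMOUNT_USD:
        flags.append("first-ever international wire for a large amount")
    if customer.age >= ELDER_AGE and wire.amount_usd >= LARGE_AMOUNT_USD:
        flags.append("large transfer by an elderly customer")
    if wire.destination_country in HIGH_RISK_COUNTRIES:
        flags.append("destination associated with known scam centers")
    return flags

# The scenario described above: an elderly first-time sender wiring $100,000 abroad.
if __name__ == "__main__":
    reasons = scam_risk_flags(
        Customer(age=78, prior_international_wires=0),
        WireRequest(amount_usd=100_000, destination_country="KH"),
    )
    if reasons:
        print("Hold wire for review:", "; ".join(reasons))
```

In practice, a transaction that trips flags like these would feed into the manual steps described above – holding the wire, questioning the customer, or invoking a trusted-contact protocol.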
Scams Are Usually Initiated via Technology Firms
However, the financial institution is the last point in the fraud transaction – once the victim has been contacted, groomed, and convinced to send money to a person they have never met. In most cases, the initial contacts are made on social media platforms where fraudsters (who are often themselves victims of human trafficking) reach out to potential victims to engage them in conversations. I would imagine that anyone reading this blog has received multiple messages, texts, or phone calls trying to initiate such conversations.
Charlie Nunn, the chief executive of Lloyds Banking Group and one of Britain’s most influential bankers, said in October 2024 that social media giants needed to collaborate more with lenders to protect consumers. Mr. Nunn has joined calls for technology firms like Meta to do more to clamp down on a surge in scams originating from social media. He noted that, “80 per cent of financial fraud in the UK is occurring through the big tech companies, almost 70 per cent through one company – Meta.”
Recent Developments
Yet, until recently, social media companies – even though they are an essential part of the fraud chain – had not been drawn into the effort to crack down on this activity. In the latter part of 2024, however, several steps were taken to bring the social media companies into the war against fraud.
On November 21, 2024, Meta issued a blog post discussing the phenomenon known as “pig butchering” and indicating that, in the past year, it had “taken down over two million accounts linked to scam centers in Myanmar, Laos, Cambodia, the United Arab Emirates and the Philippines.” Meta further announced that, as part of the Tech Against Scams Coalition, it co-convened a Summit on Countering Online Criminal Scam Syndicates, which brought together international law enforcement, government officials, NGOs and tech companies to discuss ways to make further progress in tackling this transnational threat.
As indicated above, on December 16, 2024, the UK’s Ofcom published its first-edition codes of practice and guidance on tackling illegal harms – such as terror, hate, fraud, child sexual abuse and assisting or encouraging suicide – under the UK’s Online Safety Act. Under these new rules, every site and app in scope must complete an assessment of the risks that illegal content poses to children and adults on their platform by March 16, 2025. Following that, sites and apps will need to start implementing safety measures to mitigate those risks, and the codes of practice set out measures they can take. Some of these measures apply to all sites and apps, while others apply to larger or riskier platforms. The most important changes Ofcom expects its codes and guidance to deliver include:
• Senior accountability for safety. To ensure strict accountability, each provider should name a senior person accountable to their most senior governance body for compliance with their illegal content, reporting and complaints duties.
• Better moderation, easier reporting and built-in safety tests. Tech firms will need to make sure their moderation teams are appropriately resourced and trained and are set robust performance targets, so they can quickly remove illegal material – such as illegal suicide content – when they become aware of it.
• Protecting children from sexual abuse and exploitation online. Firms will be required to take measures to prevent adult predators from grooming and sexually abusing children.
More Needs to Be Done
While social media and tech companies are taking a step in the right direction, more needs to be done. For example, scammers are using bots and AI at scale to target victims on social media, and the platforms are doing little to stop it. In a study conducted at the University of Notre Dame in October 2024, researchers concluded that none of the eight social media platforms tested provides sufficient protection and monitoring to keep users safe from malicious bot activity. Paul Brenner, a faculty member and director in the Center for Research Computing at Notre Dame and the senior author of the study, noted that, “As computer scientists, we know how these bots are created, how they get plugged in and how malicious they can be, but we hoped the social media platforms would block or shut the bots down and it wouldn’t really be a problem.” His team looked at what the platforms state they do and then tested whether they actually enforce their policies.
To test enforcement, the researchers attempted to launch bots on each platform and successfully published a benign “test” post from a bot on every one of them. The Meta platforms were the most difficult to launch bots on: it took multiple attempts to bypass their policy enforcement mechanisms, and the researchers only succeeded in posting their “test” post on the fourth attempt. The only other platform that presented a modest challenge was TikTok, owing to its frequent use of CAPTCHAs, while three platforms – Reddit, Mastodon and X – provided no challenge at all. Hence the researchers’ conclusion that none of the platforms provides adequate protection against malicious bot activity. Brenner argued that laws, economic incentive structures, user education and technological advances are all needed to protect the public from malicious bots.
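In rough terms, the study’s enforcement test boils down to repeatedly attempting a benign automated post and recording whether the platform blocks it. The sketch below is a hypothetical reconstruction of that logic, not the researchers’ actual code; the PlatformClient interface merely stands in for whatever posting mechanism each platform really exposes:

```python
from typing import Protocol

class PlatformClient(Protocol):
    """Hypothetical stand-in for a platform's real posting mechanism."""
    name: str

    def post(self, text: str) -> bool:
        """Attempt an automated post; return True if it was published."""
        ...

def test_bot_enforcement(client: PlatformClient, max_attempts: int = 5) -> dict:
    """Record whether, and after how many attempts, a benign bot post gets through."""
    for attempt in range(1, max_attempts + 1):
        if client.post("This is a harmless automated test post."):
            # The post was published: enforcement did not stop the bot.
            return {"platform": client.name, "blocked": False, "attempts": attempt}
    # Every attempt within the budget was blocked.
    return {"platform": client.name, "blocked": True, "attempts": max_attempts}
```

Under this framing, a platform “passes” only if it blocks every attempt – which, per the study, none of the eight platforms did.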
Shared Liability
In a recent blog post, fraud expert Neira Jones discussed the topic of whether social media and tech companies should share in the liability for money lost by victims of scams. She concluded that, “In our increasingly digital world, financial services provision doesn’t solely rely on financial services institutions. Therefore, to enhance ecosystem safety and integrity, the accountability burden shouldn’t be placed solely on these institutions, don’t you think?” Ms. Jones also noted that there have been recent developments in the UK, Australia, and Singapore that extend some liability or responsibility to technology companies and other entities beyond traditional financial institutions. She said that the trend in shared liability “appears to be moving towards a more collaborative approach, with regulators recognising that effective fraud prevention requires cooperation between financial institutions, telecommunications providers, and tech platforms.”
Final Thoughts
As Neira Jones indicates, if victims of scams are going to be compensated for their losses, it makes sense that social media and technology companies should share in the liability. While it is understandable that victims of scams look to large corporations to cover their losses, that is a difficult and complex road to navigate. It would be much more productive if social media and technology companies would team up with financial institutions to pursue stronger measures to prevent these frauds from taking place. The social media, technology, and financial industries have tremendous resources, both financial and technological, to meet the challenges presented by fraudsters.
Let’s all join together to win this war against fraud.