Online scams are big business. In the EU, according to the most recent figures, online scammers defrauded consumers out of €4.3bn in 2022. Increasingly, they use sophisticated adverts, including AI-generated “deepfakes” of figures ranging from Elon Musk to the UK personal finance expert Martin Lewis, to lure individuals into disclosing personal data or investing in fraudulent schemes. The vehicles are often social media platforms, which profit indirectly from carrying the ads. No business, least of all some of the world’s most powerful, should be able to profit from fraud on this scale.
Though mechanisms for reimbursing victims, generally via the banking sector, are improving, the harm done by such frauds is huge. It includes not just the immediate losses and stress to victims and their banks, but also the erosion of trust in respectable sources of information and in the financial industry.
Getting fraudulent material taken down, however, can be a game of whack-a-mole, as the Financial Times discovered when deepfake ads apparently showing its columnist Martin Wolf promoting fraudulent investments were found on Meta platforms. The FT has established that these fakes were seen by millions of users; many may have lost money as a result. As soon as one ad was removed, others popped up from different accounts, with Meta’s systems seemingly unable to keep up, though the ads do now appear to have been stopped.
Circulation of fraudulent, indeed criminal, material cannot be justified. Given how hard it is to stamp out such advertising after the fact, though, this is a case where prevention is better than cure. Social media platforms should have a legal duty not to provide ad space to fraudsters in the first place. They ought to be expected to “know their customers” and be held liable, with proper enforcement and tough penalties, if they fail to block the dissemination of fraudulent ads.
The EU is considering legislation along those lines. Member states are discussing proposals from Brussels to introduce a right to automatic reimbursement from PayPal, Visa, Mastercard and banks for customers defrauded by scammers. But an amendment submitted by the Irish finance ministry, which is gaining traction in other EU capitals, would go further: it would legally require online platforms to check that an advertiser is authorised by a regulator to sell financial services, and to block it if not.
Brussels frets that the amendment would conflict with a provision in the EU’s Digital Services Act under which online platforms cannot be required to conduct broad-based monitoring of content. There may also be squeamishness over antagonising Donald Trump, who wants to defang EU regulation of US tech firms.
Yet having to verify whether financial advertisers are authorised does not constitute large-scale monitoring, and would only be required of very large online platforms or search engines. Some already do this, or have committed to do so: Google runs a financial services certification programme in 17 countries, while Meta agreed with the UK’s Financial Conduct Authority in 2022 to ban financial ads by firms not registered with the regulator. And the EU should prioritise robust consumer protection over the protestations of the US president and his Big Tech backers.
A legal obligation to verify financial advertisers would not address the wider problem of celebrity deepfakes being used in scams and promotions for everything from cookware sets to dental products. But the fact that sellers of financial products must usually be registered with regulators opens a route to blocking a particularly harmful form of online fraud. The EU, and the UK, should set an example to other jurisdictions and take action now.