When billionaire Dutch TV producer John de Mol sued Facebook in 2019 over its alleged failure to stop scammers from using his image in deceptive ads, the social media company sent Rob Leathern to Amsterdam to meet with de Mol’s team and to speak with the media.
“The people who push these kinds of ads are persistent, they are well funded, and they are constantly evolving their deceptive tactics to get around our systems,” Leathern told Reuters at the time.
During his four years at the company now known as Meta, Leathern was in many ways the public face of its effort to fight scam ads. He led the business integrity unit tasked with preventing scammers and other bad actors from abusing Meta’s ad products. He regularly spoke to the media about scam ads. Leathern also oversaw transparency efforts like the Meta Ad Library, the industry’s first free and searchable repository of digital ads, and the launch of identity verification for political advertisers.
But since leaving Meta at the end of 2020, Leathern has watched as criminals deployed deepfakes and used artificial intelligence to craft more convincing scam ads. He said he became alarmed as major platforms failed to invest in teams and technology at the rate needed to fight such exploitative ads.
“The technology and the progress has stagnated the last five years,” Leathern said in an interview. “I also feel like we just don’t really know how bad it’s gotten or what the current state is. We don’t have objective ways of knowing.”
Leathern has teamed up with Rob Goldman, Meta’s former vice president of ads, to launch CollectiveMetrics.org, a nonprofit aimed at bringing more transparency to digital advertising in order to fight deceptive ads. The goal is to use data and analysis to measure things such as the prevalence of online scam ads and to lift the veil on the opaque ad systems that generate hundreds of billions of dollars in revenue for companies like Meta.
Their effort comes as losses due to scams have skyrocketed around the world. The Global Anti-Scam Alliance, an organization that researches scam trends and includes leaders from Meta, Google, and other platforms on its advisory board, estimates that victims collectively lost at least a trillion dollars last year. Its 2025 Global State of Scams report found that 23 percent of people have lost money to a scam.
The report said that many victims fail to report scams due to feeling ashamed or because they don’t know who to tell. Of those who did report a scam, more than a third said that “no action was taken by the platform after reporting it.”
Leathern said that it’s impossible to know exactly how many scam ads there are on platforms like Facebook and YouTube because the companies don’t make data accessible for independent research.
“I want there to be more transparency. I want third parties, researchers, academics, nonprofits, whoever, to be able to actually assess how good of a job these platforms are doing at stopping scams and fraud,” Leathern said. “We’d like to move to actual measurement of the problem and help foster an understanding.”
As a first step, they commissioned an online survey of 1,000 American adults to gauge how consumers view efforts by platforms to fight deepfakes and scam ads. Almost half of respondents (47 percent) said that TikTok is doing a poor or very poor job, the worst showing among the platforms polled. Facebook and Instagram fared next worst: 38 percent of respondents said Facebook was poor or very poor at preventing deepfakes and scam ads, while 33 percent said the same of Instagram. People over 55 had the most negative view of TikTok and Meta’s efforts, with 61 percent saying that TikTok does a poor or very poor job, and 47 percent and 43 percent saying the same of Facebook and Instagram, respectively.
The poor ratings for TikTok and for two Meta products suggest that consumers have an overall negative perception of the companies’ anti-scam efforts, according to Leathern.
“People seem quite more negative than I would have expected,” he said.
He added: “There’s been a loss of institutional knowledge at some of these companies. I just think we’re in for a hard time, and I don’t see the mechanisms in place for much accountability yet.” (Leathern’s wife currently works in product marketing at Meta.)
Melanie Bosselait, a TikTok spokesperson, said in an email that the company’s Community Guidelines prohibit “attempts to scam, trick or defraud people.” TikTok also offers educational resources, including an article titled “How We Fight Scams and Fraud on TikTok.” Bosselait said that TikTok uses a mix of automated and human systems to enforce its rules, and that it regularly reviews and strengthens such systems.
Meta spokesperson Daniel Roberts said the company has continued to invest in fighting scams since Leathern left the company.
“We aggressively fight scams on our platforms, and as scammers have grown in sophistication in recent years, so have our efforts,” Roberts said in an emailed statement. “In fact, since this former employee left Meta a half-decade ago, we have expanded our multi-layered approach to combatting scams by launching global awareness campaigns that help people spot scams, collaborating with cross-industry partners to disrupt these networks, and rolling out facial recognition technology to detect and remove celeb-bait ads.”
Roberts said that Meta has seen a more than 50 percent decline in user reports about scam ads since the summer of 2024, and removed more than 134 million scam ads this year.
Meta is currently being sued in California by Australian billionaire Andrew Forrest, who alleges that the company’s automated ad systems assisted investment scammers in placing ads that impersonated him. In a court filing, Meta disclosed that it had hosted roughly 230,000 scam ads that featured Forrest’s likeness since 2019.
An October report from the Tech Transparency Project found that Meta has recently earned at least $49 million from scam advertisers that often used deepfakes of public figures like Donald Trump, Elon Musk, and Alexandria Ocasio-Cortez.
Leathern said one potential reason that scam ads are still widespread on platforms is that the companies worry that “too much good revenue will get flushed out if they are more aggressive about getting rid of the bad.”
Roberts disagreed.
“We fight fraud and scams because people on our platforms don’t want this content, legitimate advertisers don’t want it, and we don’t either,” he said. “That’s why we’re always looking for new ways to stop them and take them down.”
CollectiveMetrics.org’s survey data shows that consumers generally believe that platforms and governments have a responsibility to prevent scam ads. But only 36 percent of respondents said digital platforms are doing a very or somewhat good job fighting deepfakes and scam ads.
“Consumers in the US definitely expect both tech companies and the government to help protect them from the potential negative effects of deepfakes,” Leathern said. “And also they don’t feel like platforms are doing a great job yet in terms of preventing scams and deepfakes.”
Just under 50 percent of respondents aged 18 to 54 said it’s “very important” for the government to pass laws to stop deepfake ads. People over 55 were even more supportive of government action, with 65 percent saying it’s very important.
Sixty-seven percent of respondents aged 55 and older said it was “very important for online platforms to prevent fraudulent ads,” compared to 55 percent of those aged 54 and under.
“I think the older users are disproportionately getting targeted by scams and problematic offers,” Leathern said.
The survey showed that people think TikTok and Meta are doing the worst job preventing deepfake scam ads. But Leathern said we lack real data to understand how such platforms are actually performing.
“Let’s have some independent third parties be able to look at whether you have more fraud and scams than YouTube does. Because, look, I’ve worked at both Google and at Meta, and people tell me all the time, the ads on Google are terrible,” said Leathern, who worked on privacy products at Google from 2021 to 2023. “I’d love to have that conversation with real data.”
The challenge is that it’s currently impossible for researchers, governments, and other third parties to fully assess the performance of platforms. Even the Digital Services Act in the European Union, which mandates additional data transparency and reporting by major platforms, hasn’t resulted in the kind of data that’s needed to perform large-scale audits of ads and advertisers, according to Leathern.
“I think it’s super well intentioned,” he said of the DSA. “I think that they aren’t necessarily requiring the right metrics to be surfaced or the right information to be provided to the public. So I think those laws need to evolve.”
Leathern said that the ideal scenario is for platforms to see scam prevention as a competitive advantage and to protect users by investing in new features and systems. He recently proposed that platforms notify users who have clicked on an ad that is later removed for violating policies against scams and fraud.
“These scammers aren’t getting people’s money on day one, typically. So there’s a window to take action,” he said.
Leathern also said that platforms should have to donate or otherwise disgorge the money earned from scam ads placed via their systems. Today, Meta, Google, TikTok, and other companies remove scam ads but keep the money that was spent to run them.
“It certainly shouldn’t necessarily be enriching companies if there’s scammy ads being run,” he said. “The revenue could also be used in other ways to fund nonprofits to educate people about how to recognize these kinds of scams or problems. There’s lots that could be done with funds that come from these bad guys.”