Meta’s Oversight Board Slams Slow Moderation of Deepfake Celebrity Ads on Platform

Meta has come under fresh scrutiny after its independent Oversight Board raised serious concerns about the company’s slow and uneven response to a growing wave of deepfake scams featuring celebrity impersonations. These fraudulent ads, many of which use AI to mimic well-known public figures, have quietly gained traction across Meta's platforms, often spreading unchecked for days — and sometimes even weeks — before any meaningful action is taken.

The latest case examined by the Board centered on an ad promoting an online gambling app, in which a deepfaked version of Brazilian football legend Ronaldo Nazário appeared to endorse the game. Despite repeated reports from users — more than 50 in total — the ad remained live and continued to rack up views, crossing the 600,000 mark before it was finally removed. Even then, the original post behind the ad lingered on Meta’s platform until the Board agreed to formally review the matter, highlighting just how sluggish the enforcement process can be when AI-generated content is involved.

At the heart of the Board's findings was a fundamental issue: Meta’s review teams don’t appear to have the authority or tools to stop this kind of content early. Instead of proactively removing impersonation scams, moderators reportedly wait for escalations, often only acting once reputational damage is already done. That approach, the Board argued, leaves the door wide open for scammers to exploit loopholes and game the system.

The problem isn’t confined to a single post. The Board noted that Meta’s own ad library contains thousands of videos promoting the same app, many featuring doctored clips of other public figures — including Cristiano Ronaldo and even Meta’s CEO, Mark Zuckerberg. If nothing else, the scale of the issue makes clear that this isn’t just the odd bad apple; it’s a systemic pattern that’s proving tough to stamp out.

Although Meta says it’s experimenting with facial recognition technology to catch this kind of manipulation, the Board believes that isn’t enough. Without clear internal rules, well-trained moderators, and a faster response when harm is reported, the company risks looking the other way as bad actors continue to run amok. One hand seems unaware of what the other is doing, and in the meantime, scammers are slipping through the cracks in the system.

Public frustration is mounting. In recent months, several celebrities — including Jamie Lee Curtis — have taken to social media to call out fake ads using their likenesses. Even when these posts are reported, they often stay up long enough to go viral, suggesting Meta’s tools either aren’t up to the task or aren’t being used properly. The result is a growing sense that the platform’s guardrails are too easily bent — or simply missing when they’re most needed.

The picture looks no better when you step back. Reports have found that Meta-linked platforms were involved in nearly half of all scams reported on Zelle at JPMorgan Chase over a recent 12-month period. Authorities in the UK and Australia have observed similar trends, adding to the impression that Meta’s enforcement approach is out of step with the threat it faces.

Even with warnings piling up, the company remains hesitant to tighten the screws on its ad system. Critics suggest Meta is reluctant to introduce friction in the ad-buying process — wary, perhaps, of scaring off high-volume advertisers, even those with a questionable track record. It’s a classic case of balancing profit against protection — and right now, many observers say the scales are tipped the wrong way.

The Oversight Board has urged Meta to retrain its staff, revise its internal playbook, and give moderators clearer authority to take down content without waiting for formal escalation. Whether that advice will stick remains to be seen. For now, though, it’s clear that as deepfake technology becomes easier to use and harder to spot, the company’s current approach is struggling to keep up with a fast-moving threat.


Image: DIW-Aigen
