Meta Faces Criticism for Approving Controversial Ads in India

Meta is under scrutiny once again for its ad approval process. A nonprofit watchdog called Eko tested Meta's systems by submitting 22 ads containing harmful content. Despite violating Meta's rules, 14 of these ads were approved. The ads, aimed at audiences in India, promoted violence and spread false information about the elections, encouraging attacks on Muslim minorities and pushing Hindu supremacist ideas.

The ads were never shown to Facebook users because Eko removed them after testing Meta's system. This is not the first time Eko has exposed issues with Meta's advertising: the group previously managed to get harmful ads targeting European users approved, which were likewise never displayed to the public.

Eko uses these tests to expose weaknesses in Meta's ad monitoring. In its latest test, Eko also used AI to create images for the ads. Meta did not flag any of these ads as AI-generated, even though the company has been working on tools to identify such content.

Meta has responded by pointing to its rules requiring political advertisers to disclose their use of AI, and mentioned efforts to improve its systems ahead of the elections in India. However, Meta did not directly comment on Eko's recent findings. The situation highlights ongoing challenges in moderating digital advertising, especially when it involves sensitive topics like elections and social tensions.

Meta approved 14 ads calling for the killing of Muslims and the execution of a key opposition party leader, and pushing stop-the-steal-style narratives during the official election "silence" period.
Image: DIW-Aigen
