Racist AI Videos Created With Google’s Veo 3 Spread on TikTok

Short AI-generated clips made with Google’s Veo 3 are circulating widely on TikTok, racking up millions of views despite clearly violating platform rules. These videos, discovered by the nonprofit watchdog Media Matters, include racist and antisemitic imagery. Most focus on dehumanizing portrayals of Black people, immigrants, and Jewish and Asian communities.

Each video runs for eight seconds or less, and many carry the “Veo” watermark, identifying Google’s AI video tool as the source. In several cases, hashtags and usernames referenced AI or Veo directly.

A Model Meant for Creativity, Misused for Harm

Google launched Veo 3 in May, presenting it as a major step forward in AI video creation. The tool allows users to turn text prompts into realistic clips and audio. According to its documentation, it’s supposed to block content that promotes violence, abuse, or discrimination.

Yet the videos flagged by Media Matters managed to slip through. Some recreated long-standing racist tropes, including primate imagery and cultural stereotypes, with unsettling realism. Independent tests by journalists found that Veo 3 could generate similar scenes from simple prompts, suggesting its safety filters catch far less than intended.

TikTok’s Enforcement Struggles to Keep Pace

TikTok prohibits hate speech and says it uses both technology and human reviewers to remove harmful content. Still, the speed and volume of uploads make full enforcement difficult. By the time Media Matters released its findings, many of the offending accounts had already been banned. Others were taken down after the report surfaced.

TikTok confirmed that the accounts listed in the report have now been removed, though not before their videos attracted millions of views.

The Problem Isn’t Limited to One Platform

While TikTok remains the main platform where these clips spread, similar videos were spotted on YouTube and Instagram. Some gained traction on X (formerly Twitter), where looser moderation rules have allowed more harmful AI content to remain online.

The risk may grow. Google plans to integrate Veo 3 into YouTube Shorts, which could make it easier for users to generate and upload the same kind of offensive content on another major platform.

Rules Are in Place. Enforcement Isn’t.

Both Google and TikTok have written policies that ban hateful content. But the practical enforcement of those rules has fallen short. AI tools like Veo 3 weren’t built to understand the nuances behind racial imagery or historical stereotypes, which makes it easier for users to bypass restrictions using vague or coded prompts.

These failures aren’t new. AI systems have long been used to spread offensive content. What’s changed is the level of visual realism and how quickly that content can go viral. And while the companies involved continue to talk about their guardrails, those protections don’t always work in practice.

Note: This post was edited/created using GenAI tools.
