YouTube Introduces Tool for Labeling AI-Generated Videos

YouTube has launched a new feature that lets creators disclose whether their content was made or altered with AI. During upload, creators can now check a box indicating that a video contains realistic-looking AI-generated or AI-altered material. The label applies to videos that could mislead viewers, such as a person appearing to say something they never said, altered footage of real events, or entirely fabricated scenes that look authentic, like a fake storm hitting a town or an AI-generated voice narrating a video.

However, YouTube won't require the label for videos using simple effects such as beauty filters, background blur, or animation. In November, YouTube outlined its rules for AI-generated content, which are stricter for music and more flexible for other material. For example, if a song is generated to mimic an artist singing something new, that artist's record label can request its removal. YouTube also noted that people who find themselves faked in videos may have a harder time getting those videos taken down unless they file a privacy complaint, a process the company says it is still improving.

The system relies on creators honestly disclosing their use of AI. YouTube says it is exploring ways to detect AI-generated content automatically, but such detection is not always accurate.

YouTube also said it may apply an AI label to a video itself if it believes the content could mislead viewers. This is more likely for videos on sensitive topics such as health, elections, or finance.
