LinkedIn Teams Up with C2PA to Label AI-Generated Content for Transparency

LinkedIn has begun labeling AI-generated content through a partnership with the Coalition for Content Provenance and Authenticity (C2PA), an organization that tracks the origin of digital images by embedding provenance metadata directly into the files. Images on LinkedIn that carry this metadata now display a small tag in the top right corner, and clicking the tag reveals details about how the image was created.

The tagging is automatic and relies on the system developed by C2PA, which is working on universal standards for labeling AI-generated images and videos, including digital watermarks that are difficult to remove. Major tech companies such as Microsoft, Google, Adobe, and OpenAI support these standards, and TikTok, another social media platform, began applying them this month as well.
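In practice, a platform adopting the standard reads the provenance manifest embedded in an uploaded image and decides whether to surface a label. The sketch below is a minimal illustration of that decision step, assuming the manifest has already been extracted to JSON (for example with C2PA's open-source tooling); the field names follow the publicly documented C2PA manifest layout, but the exact structure a given platform consumes may differ.

```python
import json

# IPTC digital source type commonly used in C2PA manifests to mark content
# produced by a generative-AI model (assumed value for this sketch).
AI_SOURCE_TYPE = "trainedAlgorithmicMedia"


def is_ai_generated(manifest_json: str) -> bool:
    """Return True if the C2PA manifest declares the image as AI-generated.

    Expects the manifest store as a JSON string, e.g. the output of an
    extraction tool run against the uploaded image.
    """
    store = json.loads(manifest_json)
    for manifest in store.get("manifests", {}).values():
        for assertion in manifest.get("assertions", []):
            # The "c2pa.actions" assertion records how the asset was made.
            if assertion.get("label") != "c2pa.actions":
                continue
            for action in assertion.get("data", {}).get("actions", []):
                if AI_SOURCE_TYPE in action.get("digitalSourceType", ""):
                    return True
    return False


if __name__ == "__main__":
    # Hypothetical manifest excerpt for demonstration only.
    sample = json.dumps({
        "manifests": {
            "urn:uuid:example": {
                "assertions": [
                    {
                        "label": "c2pa.actions",
                        "data": {
                            "actions": [
                                {
                                    "action": "c2pa.created",
                                    "digitalSourceType": "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
                                }
                            ]
                        },
                    }
                ]
            }
        }
    })
    print("Show AI label:", is_ai_generated(sample))
```

A platform would run a check like this at upload time and attach the visible tag whenever it returns True, regardless of which tool generated the image.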

Adding these tags matters for several reasons. Social media platforms are under pressure to be more transparent about the content they host, and labeling helps slow the spread of misleading images such as deepfakes. While many AI-created images are harmless, some can mislead in ways that carry serious consequences: fake images of a military attack, for example, or false information about an ongoing conflict can deceive the public.
This is particularly risky during election seasons around the world. Misleading images can sway public opinion even if they are later tagged as fake, and the tags are sometimes added too late, after the image has already made an impact.

That’s why it is crucial to detect and label these images quickly. The next challenge will be to make sure that everyone understands what these tags mean. The goal is to create a consistent way of reporting AI-generated content across all platforms.

