Google, Microsoft, OpenAI, And Meta Join Hands To Stop Their AI Tools From Exploiting Children

Top AI firms have signed a new pledge to better protect the safety and privacy of children online.

From OpenAI and Meta to Microsoft, Google, and beyond, the companies have jointly committed to ensuring their AI tools are not used for child exploitation or the generation of abusive material depicting minors.

The initiative was launched by the child-safety group Thorn together with All Tech Is Human, who argue it is high time companies behaved responsibly on this front.

The commitments set a major precedent for the sector and represent a significant step in efforts to protect children from such harm as generative AI continues to evolve.

The main goal is to stop the production of explicit content involving children and to prevent its spread across the apps and search engines in common use today.

For context, close to 104 million files of suspected CSAM were reported last year alone. Without action of this kind, generative AI threatens to make an already serious problem worse.

A flood of AI-generated material would further overwhelm the law enforcement agencies working hard to identify genuine victims, adding yet another hurdle to that effort.

On Tuesday, the groups released a paper titled ‘Safety by Design for Generative AI’ that calls for a rethink of current strategies and lays out recommendations for building AI tools and platforms in a way that is safe to use and does not harm children.

As part of that endeavor, one recommendation asks firms to vet the data sets used to train AI models and to exclude any containing CSAM or adult sexual content.

Because the two kinds of explicit material are often combined, experts say both must be left out of training data to tackle the problem.

In the same vein, Thorn is pressing top social media platforms to remove links to websites and services that enable the creation of nude images of children, since such tools, with plenty of templates available online, fuel the production of more AI-generated abusive material.

As the paper notes, the proliferation of AI-generated abusive material makes identifying victims of child abuse far more difficult by creating a new “haystack” problem, a reference to the volume of content law enforcement agencies must sift through.

Projects like this are meant to show the world that companies need not throw up their hands; they can make serious changes to the technology in the places where it is causing the most harm.

Meanwhile, some of the leading tech giants have also agreed to separate images, audio, and video featuring children from data sets containing adult content, preventing their models from combining the two.

Furthermore, quite a few plan to add watermarks identifying AI-generated content, though it should be noted that such measures are not foolproof, as watermarks can be erased fairly easily.

Image: DIW-Aigen
