Companies Urged To Take Action As New Study Finds AI Image Generators Were Trained On Explicit Images Of Children

The power of top-of-the-line AI image generators is undeniable, but a new study has companies sounding the alarm after some shocking findings.

Among them is a dark secret: some of the world’s most popular AI image generators were trained on thousands of explicit images of underage children.

The new report has generated a wave of concern in the industry, with researchers urging authorities to step in and take action before it’s too late. The findings point to a serious flaw in how training data is collected, with explicit pictures of children making their way into the systems behind widely used apps.

The images vary in how explicit they are, ranging from fully clothed children to outright nudity. Trained on such material, AI image generators find it far easier to produce similar imagery, prompting law enforcement agencies around the globe to step in and issue alerts about incidents of this kind.

Researchers say the problem has gone unnoticed largely because AI tools operate with little regulation and little scrutiny of their inner workings. The result is that generators can blend adult content with depictions of young, innocent children without anyone realizing what is happening.

The news comes from new research by the Stanford Internet Observatory, which reported that more than 3,200 images linked to possible child abuse were found in LAION’s AI database.

For those who might not be aware, LAION is one of the most popular indexes of captioned online images used for AI training purposes today.

To finalize the results of the study before publication, the watchdog team at Stanford University worked side by side with the Canadian Centre for Child Protection and a host of other leading anti-abuse nonprofits to establish that the material was illegal and had been collected without any form of consent.

The material is undoubtedly illegal, and reports are being filed with police and the relevant entities; close to 1,000 of the pictures were validated through external means.

The response was swift: by Wednesday evening, as soon as Stanford’s report was published, LAION announced it had removed the data, even if only temporarily.

The organization said in a public post that it has zero tolerance for such acts and illegal material, and that stricter safeguards are being put in place to prevent similar events in the future. But the fact that all of this slipped through so easily makes one wonder what checks and balances such image generators actually have, and whether any regulation exists to prevent the pictures from being republished.

While it’s true that LAION’s index is huge, at close to 5.8 billion pictures by the group’s own count, the fact that it continues to feed leading AI tools capable of producing harmful results is certainly concerning. These are real victims who have already gone through so much, and the last thing the world needs is to see them put through that kind of torment again.

LAION is the project of a researcher who hails from Germany, Christoph Schuhmann. Earlier this year he spoke publicly about the sheer size of the firm’s database and its appeal to people seeking easy access to images without big tech giants controlling what was available online.

Much of the organization’s material comes from the likes of Common Crawl, another leading data repository that scrapes content from the open web. But seeing the organization fail so badly at scanning and filtering that material is definitely worrisome.
