UK Children Are Using Generative AI to Create Indecent Images of Classmates, Internet Safety Group Warns

The rise of generative AI has brought both benefits and harms, and it appears that children in the UK are using it to break the law without even realizing it. The UK Safer Internet Centre has warned that school children across the British Isles are using AI to create explicit images of their classmates, according to reports from teachers.

The main issue here is that children may not be aware that what they are doing is against the law. Even if the images aren’t created with malicious intent, they are dangerous because they could end up in the hands of sex offenders. If these images are posted to the internet, they could all too easily be disseminated widely, making explicit images of children far easier for offenders to obtain.

It is also important to note that child sexual abuse imagery produced by AI has become incredibly realistic. This means it will be ever more difficult to distinguish between genuine CSA material and AI-generated images, which makes the prospect of children generating these images themselves concerning to say the least. This information comes from the Internet Watch Foundation, based on a report it released in October of this year.
One thing that bears mentioning is that the law doesn’t just cover realistic imagery; it also extends to cartoon depictions of CSA, which makes it essential to inform children about the dangers of taking part in these practices. They may not be aware that they are not only breaking the law but also endangering other children.

The director of UKSIC, David Wright, has said that although young people may not grasp how serious their actions are, this kind of misuse must be anticipated whenever new technology comes to the fore. It will be essential to crack down on the generation of CSA material, and the companies behind generative AI platforms must take the initiative to prevent such usage in the first place.

Photo: DIW - AIgen
