AI-Generated Images Are Spreading A Wave Of Disinformation As They Easily Fool AI Detection Software

Images produced with AI technology have become a source of concern for countless people around the globe.

Be it stolen pictures, artwork, or fake marketing campaigns, they're the root cause of a rapidly spreading wave of disinformation online. And with months passing and no solution in sight, people are getting worried, and rightly so.

The news comes via a recently published report from the New York Times, which states that software designed to detect such images is now easily fooled. Yes, one of the leading lines of defense against this spread of misinformation can be tricked by simply adding grain to pictures produced with AI.

The report went on to elaborate on how adding grain, or texture, to an AI-generated picture makes it far harder for detection tools to flag. We're talking about detection rates falling from 99% to just 3%. How's that for some shocking news?
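For readers curious what "adding grain" looks like in practice, the edit usually amounts to sprinkling a small amount of random noise across the pixels. Here is a minimal sketch, assuming Python with the Pillow and NumPy libraries; the file names and noise strength are illustrative only and not taken from the report.

```python
from PIL import Image
import numpy as np

# Load an (illustrative) AI-generated image; the file name is hypothetical.
img = np.asarray(Image.open("ai_generated.png")).astype(np.float32)

# Add light Gaussian "grain": zero-mean noise with a small standard deviation.
# A sigma of around 5-10 on a 0-255 scale is barely visible to the eye.
sigma = 8.0
noisy = img + np.random.normal(0.0, sigma, img.shape)

# Clip back to valid pixel values and save the result.
noisy = np.clip(noisy, 0, 255).astype(np.uint8)
Image.fromarray(noisy).save("ai_generated_grainy.png")
```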

What is even more striking is that Hive, a popular and sought-after detection tool, is also having trouble, despite previous studies showing it had a very high success rate at spotting AI-generated content.

Hive cannot reliably differentiate between regular images and those produced with AI, especially when the owners of the images make them more pixelated.
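Pixelation, for its part, is typically just a matter of shrinking an image and then enlarging it again. A minimal sketch of that, again assuming Pillow and a hypothetical file name:

```python
from PIL import Image

# Open an (illustrative) image; the file name is hypothetical.
img = Image.open("ai_generated.png")

# Pixelate by shrinking the image and scaling it back up with
# nearest-neighbour resampling, which produces the blocky look.
factor = 8
small = img.resize((img.width // factor, img.height // factor), Image.NEAREST)
pixelated = small.resize(img.size, Image.NEAREST)
pixelated.save("ai_generated_pixelated.png")
```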

As a result, experts say such software shouldn't be the only line of defense, even as so many companies work hard to get rid of misinformation and stop images like these from being published and shared.

It’s like robbing people of their hard work and talent, and we don’t see how that can ever be acceptable, for obvious reasons. One expert from Duke University who knows this kind of software inside and out describes it as an arms race: every time someone builds a better generator, others come forward with an even better discriminator to catch it, and that discriminator is then used to train an even better generator.
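For the technically inclined, that feedback loop is essentially how generative adversarial networks are trained. The toy sketch below, assuming PyTorch, pits a tiny generator against a tiny discriminator on one-dimensional data; the networks and numbers are illustrative only and have nothing to do with any particular image detector.

```python
import torch
import torch.nn as nn

# A tiny GAN that learns to imitate a 1D Gaussian distribution.
gen = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 2 + 5      # "real" data drawn from N(5, 2)
    fake = gen(torch.randn(64, 8))         # the generator's attempt

    # Train the discriminator to tell real from fake.
    opt_d.zero_grad()
    d_loss = loss_fn(disc(real), torch.ones(64, 1)) + \
             loss_fn(disc(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the (now slightly better) discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(disc(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```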

The news comes at a time when users are putting out new kinds of misinformation created with AI technology. The idea is to push political campaigns that influence the minds of the general public in a deceiving manner, which is obviously wrong.

One of the most prominent recent examples involves Florida Governor Ron DeSantis and his announcement that he is running for president in the upcoming election.

That particular campaign sent out fake pictures of former US President Trump.

