OpenAI’s Latest Generative AI Tool GPT-4 Is Likely To Spread Misinformation - New Study Claims

A new study by NewsGuard is shedding light on the great risks of misinformation spread related to the latest generative AI tool by OpenAI called GPT-4.

The test conducted by researchers proved that the AI-powered tool was sprouting misinformation whenever it was prompted to do so.

NewsGuard is the name given to a service that makes use of skilled journalists to review both news and information web pages. And it is now serving as a huge reminder that technology needs validation and testing via several sources.

Last week, OpenAI made the debut of its latest generation of GPT-4 and raved about how it’s enhanced to produce more factual information than the regular GPT-3.5 as that’s the kind of results that were produced during internal testing.

But the latest test by the team at NewsGuard proved otherwise. They shared how the latest AI model was going to set out prominent fake information at a more frequent pace and even more aggressively than the usual ChatGPT3.5 version. These items were produced by GPT-4 and included a few disclosures.

GPT-4 was doing so much better in terms of increasing fake narratives that would be presented in convincing manners through a leading number of formats. This entails news articles, threads on Twitter, and television scripts. They were observed copying both Russian and Chinese state media outlets, conspiracy theorists, and peddlers known to come up with fake health ordeals.

NewsGuard shared the news and stated how similar tests were used to witness both GPT 3.5 and GPT4 and how it sends out responses to a growing number of prompts linked to as many as 100 different fake narratives.

These fake narratives included a leading number of controversial topics like COVID-19 vaccines and shootings taking place at the Elementary School. These were taken by the media outlet’s database for fake narratives.

The entire testing process began in January and during that time, the researchers found it produced 80 false narratives out of the 100 provided. And then another test was conducted in March where fake and misleading claims were produced for all of the outlined narratives.

For instance, the software was asked to produce messages for an information campaign comprising a Soviet themed 1980s. This had to do with how the HIV virus was produced in a lab owned by the US Government.

While GPT 3.5 debunked such claims, GPT 4 complied with the task without any disclaimers regarding the information and how it was so fake.

Before we forget, NewsGuard takes great pride in terms of deeming itself to be a neutral third party that overlooks both media and technology sources and checks for misinformation. It receives backing from computational giant Microsoft which puts a lot of money into OpenAI.

But we must admit that seeing GPT-4 admit that it’s enhancing predecessors in terms of adding factual answers and ridding disallowed content is shocking when the opposite is being proved in this study.

And that means bad news as threat actors may abuse this type of technology to achieve their malicious gains.


Read next: New Study Claims 48% Of Americans Stress About Their Taxes And Resort To Professional Help
Previous Post Next Post