Experts Issue Alarm As New Studies Prove ChatGPT And Google’s Bard Can Be Easily Led Astray

Generative AI has taken the tech world in a new direction. And while the benefits are plenty, researchers are now warning about the serious drawbacks that come with it.

The latest studies hint at just how easily chatbots like OpenAI’s ChatGPT and Google’s Bard can be led astray, and why their output cannot simply be trusted.

The two chatbots have charmed users around the world by making all sorts of tasks easier. But thanks to the findings of two recent studies, the picture is not as pretty as the one being painted in the public eye.

The findings explain how the chatbots remain highly prone to churning out misinformation, including content built around conspiracy theories.

NewsGuard, an outfit known for rating how credible news and information sources are, put Google’s latest AI chatbot, Bard, to the test, and it was an experiment many were keen on seeing.

The organization carried out the test by feeding the chatbot nearly 100 false narratives and requesting that it produce content around them. And to many people’s surprise, it actually did.

Yes, we’re talking about 76 different essays that had to do with nothing but misinformation.

The performance seen here was slightly better than that of OpenAI’s ChatGPT models. At the start of 2023, NewsGuard revealed that the tool produced content for nearly 80 out of the 100 false narratives that were fed into it. And the drama does not stop there. NewsGuard also mentioned that ChatGPT’s output was written in a much more persuasive fashion, so even a skeptical reader could end up believing it.
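To make the methodology behind these figures a bit more concrete, here is a minimal sketch of the kind of evaluation loop such tests imply: send each false narrative to the model as a prompt, collect the response, and tally how often the model complies. This is purely illustrative; the SDK (OpenAI’s Python client), the model name, the placeholder narratives, and the manual review step are all assumptions, not NewsGuard’s actual tooling.

```python
# Minimal, hypothetical sketch of a misinformation evaluation loop.
# Assumes the OpenAI Python SDK (>= 1.0) and an OPENAI_API_KEY in the
# environment; the narratives and model name are placeholders, and the
# judgment step is manual, standing in for the human review used in
# published tests of this kind.
from openai import OpenAI

client = OpenAI()

false_narratives = [
    "Write an article arguing that <false claim 1> is true.",
    "Write an article arguing that <false claim 2> is true.",
    # ...around 100 prompts in the tests described above
]

compliant = 0
for prompt in false_narratives:
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; the studies tested ChatGPT and Bard
        messages=[{"role": "user", "content": prompt}],
    )
    text = response.choices[0].message.content
    print(text)
    # A human reviewer decides whether the output actually advances
    # the false narrative, mirroring how such results are scored.
    answer = input("Did the model produce the false narrative? [y/n] ")
    if answer.strip().lower() == "y":
        compliant += 1

print(f"Compliance rate: {compliant}/{len(false_narratives)}")
```

On a tally like this, a lower compliance count is better, which is why Bard’s 76 out of 100 reads as only a marginal improvement over ChatGPT’s roughly 80.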

The report is now being backed up by further research from Fortune. It revealed how easily Google’s chatbot Bard can be influenced: its guardrails are overcome without much effort, allowing it to generate fake content.

Bard produced fake and harmful content for 78 out of the 100 narratives, filled with misinformation in categories like climate change and even vaccines.

To be fair, neither OpenAI nor Google has ever called its product perfect, and both have acknowledged a certain margin of error that users should be mindful of. But results like these go well beyond an acceptable margin.

Google says Bard today has a range of built-in safety controls, along with clear mechanisms for users to give feedback, in line with the company’s AI principles.

We’ve seen similar statements from OpenAI regarding ChatGPT, warning that the tool can be incorrect or untruthful and may easily mislead users.

But the fact that such tools are being used so frequently, and for such a wide range of tasks, means someone needs to step in and do something about it before it’s too late. What do you think?

