New Study Says AI Tools Like ChatGPT Could Influence People’s Response To Moral Issues Like Life And Death

A new study takes a deep look at how strongly AI technology can shape users' thinking.

Tools like ChatGPT, it turns out, can sway users' views on serious moral dilemmas.

The findings come from research conducted in Germany, where the authors found that people who read a one-sided argument generated by AI were likely to be persuaded by it, even though they knew the opinion came from a chatbot and not a human being.

The researchers surveyed 767 Americans on moral questions after having them read statements produced by ChatGPT. Participants tended to side with the chatbot's arguments, and this held true even when the statements were explicitly attributed to the AI chatbot.

The study also suggested that participants underestimated how much the chatbot influenced their moral decisions. In light of this, the authors warned that there is a clear need for education to help people understand what AI is and what kind of effect it can have on society.

The study, published in the journal Scientific Reports, was carried out by researchers based in the south of the country. They repeatedly asked the chatbot whether it is acceptable to take one person's life in order to save the lives of several others.

ChatGPT, developed by OpenAI and launched late last year, has quickly gained popularity around the globe and now plays a notable role in many people's daily lives.

That said, many people do not believe AI persuades them. The researchers found that the tool generated arguments both in favor of and against the sacrifice, depending on how it was asked.

Because the chatbot offered arguments for both sides rather than just one, it did not appear consistently biased. Its statements were then shown to the study's 767 participants, each of whom faced one of two dilemmas asking whether they would save five people at the cost of sacrificing one.

After giving their own answers, participants were asked whether the chatbot had influenced them. Strikingly, whether they found it acceptable to sacrifice one person to save five depended on which statement they had been shown.



Read next: Experts Issue Alarm As New Studies Prove ChatGPT And Google’s Bard Can Be Easily Led Astray