A Recent Study Sheds Light on AI Hallucinations and What Society Thinks of Them

Tidio recently conducted research to assess public opinion on AI hallucinations. The study covers people’s experiences with the issue, their fears and expectations, the history and background of AI hallucinations, and tips for spotting and avoiding the problem. The results present an interesting mix of reasonable caution and considerable trust in AI nonetheless.

AI hallucinations are best defined as cases when large language models (LLMs) generate false information but present it as authentic. As AI tools powered by LLMs become more advanced, the problem of AI hallucinations is becoming more widespread. It’s definitely not a new issue: AI hallucinations are as old as AI tools themselves (and the first of those appeared between 1950 and 1956). The first mention of the term “AI hallucination” appeared in a research paper in 2000, and Google DeepMind resurfaced the term in 2018. Of course, interest in the concept peaked in 2022, when ChatGPT went public and more and more people got acquainted with LLMs and how they work.

How is the situation now? Well, AI does hallucinate a lot. AI-generated false information comes in many different forms. Some examples of AI hallucination types include:
  • Prompt contradictions, when LLMs give a response unrelated to the original prompt;
  • Sentence contradictions, when sentences generated by AI contradict each other, often as an answer to the same prompt;
  • Factual contradictions, when AI generates misleading or false information and presents it as correct;
  • Calculation mistakes, when the prompt is an equation or any other type of math problem, and AI gets it wrong;
  • Source contradictions, when AI cites sources and references that turn out to be fake.
There are many other things that can go wrong in responses given by artificial intelligence. What are the reasons for these mishaps? Well, there are plenty. According to survey respondents, the most responsibility for AI hallucinations lies with users who write prompts and with governments that want to push their agendas. Quite an interesting (and a bit worrisome) perception.

Dig deeper into the reasons for AI hallucinations, and it turns out the picture is far from simple. Still, some factors definitely make AI hallucinate more often. One of them is insufficient training data: the data a particular LLM was trained on cannot cover everything, especially when it comes to recognizing text with all its nuances and unique patterns, so the AI struggles to fill in the gaps and hallucinates. Another common issue with LLMs is poor generalization: a model may handle the data it was trained on perfectly yet perform badly on new, unseen data. Finally, a very widespread reason for AI hallucinations is prompts being encoded wrong. An LLM maps each term to numbers, which is known as vector encoding, and if something goes wrong during the encoding and decoding process, the tool ends up hallucinating.
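To make the vector encoding idea a little more concrete, here is a minimal, hypothetical sketch in Python: a toy vocabulary, a made-up embedding table, and an encode function that maps each word of a prompt to a vector of numbers. The names and sizes are illustrative assumptions, not how any particular LLM actually works; the point is simply that a word the vocabulary doesn’t cover loses its meaning on the way in, which is one simplified way information can get distorted before the model even responds.

```python
import numpy as np

# Toy vocabulary and embedding table (made-up values, for illustration only).
vocabulary = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4, "<unk>": 5}
rng = np.random.default_rng(seed=42)
embeddings = rng.normal(size=(len(vocabulary), 4))  # one 4-number vector per word

def encode(prompt: str) -> np.ndarray:
    """Map each word of the prompt to its vector; unknown words fall back to <unk>."""
    token_ids = [vocabulary.get(word, vocabulary["<unk>"]) for word in prompt.lower().split()]
    return embeddings[token_ids]

vectors = encode("The cat sat on the hat")
print(vectors.shape)  # (6, 4): six words, four numbers each
# "hat" is not in the toy vocabulary, so it was encoded as <unk> --
# its meaning is already lost before the model generates anything.
```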

One might think that the problem is overestimated. After all, so many people use tools like ChatGPT or Bard, and nothing bad has happened, right? That’s where it’s worth taking a look at Tidio’s research. For its survey on AI hallucinations, Tidio gathered insights from a diverse group of 974 participants of various ages and backgrounds. These individuals were recruited from Reddit and through the Amazon Mechanical Turk platform, and they were asked specific questions to gauge public opinion about the scale and impact of AI hallucinations.

According to the survey, as many as 96% of the sample know what AI hallucinations are, and 86% have personally experienced them. What’s more, around 77% have been misled by information provided by AI. Even though these numbers are high, there is still a lot of trust in AI: 72% stated that they trust AI to provide truthful data. But how do people know that something they are seeing is an AI hallucination? The majority of the sample (52%) cross-reference the information with other sources, which is great news and shows that users are aware of the issue. At the same time, almost a third of respondents (32%) rely exclusively on their own instincts when judging whether they can trust AI.

Do people worry about AI hallucinations and the future? Definitely, yes. When thinking about the consequences of the issue, respondents worry about misinformation, privacy risks, and even election manipulation and the brainwashing of society. While the latter scenarios are quite unlikely in the near future, the spread of misinformation remains a concern. About half of the respondents would like to see improved user education about AI and stronger regulations and guidelines for developers. These seem like sensible next steps.

So, there are many different experiences of and opinions about AI hallucinations. People seem to be quite worried, but they haven’t lost their trust in AI. Let’s hope the industry will not let them down.

Take a look at the infographics below for more insights:

Read next: The Semiconductor Industry Receives First Quarterly Revenue Uptick in 2 Years Thanks to AI