Experts Say Hallucination Problems in AI Chatbots Are Nothing to Be Concerned About

AI chatbots often hallucinate: they answer questions confidently even when there is little truth behind their responses. They tend to do this to satisfy users, though the result is often the opposite. To some people this may not seem like a big deal, but many see it as a serious problem in need of a solution. Some experts, however, argue that hallucination cannot be eliminated and that it is not as scary as it is often made out to be.

Andrej Karpathy, co-founder of OpenAI and former senior director of AI at Tesla, has shared his views on the issue on X/Twitter. He says that hallucination is an inherent part of LLMs; you cannot separate it from them. According to Karpathy, these models can be seen as "dream machines" that imagine different information and scenarios. But are the hallucinations of AI chatbots like Bing, Bard, or ChatGPT truly a cause for concern? Karpathy notes that users play a directing role when working with AI chatbots: they act as directors, providing prompts and asking the AI to perform a particular task for them.

Karpathy says that the term "hallucination" is applied only when the content generated by AI is considered factually incorrect. He elaborates that what might look like a fault is simply how large language models work; in his view, hallucination is not a flaw but one of their greatest features.

Even so, Karpathy says that if users are concerned and see hallucination as a serious problem, the search for a solution should continue. Since AI chatbots took off in 2022, researchers and developers have been aware of the issue, and many have already started working on ways to address it.

AI chatbots' hallucinations concern users, but experts debate whether they are a flaw or a feature.
Photo: DIW-AIgen

