Anthropic Co-Founder Says AI Needs to Make Mistakes to Be Useful

One of the most common criticisms leveled at generative AI is the sheer number of mistakes it currently makes. Countless examples of so-called hallucinations have surfaced in which chatbots produce information that is patently false, raising questions about the reliability of the technology and its ability to fulfill its potential.

Despite this, Anthropic co-founder Jared Kaplan has stated that AI needs to make mistakes. In his view, an AI that constantly second-guesses itself would end up being less useful than it otherwise could be.

He describes hallucinations as a trade-off, because tolerating them is what allows AI to provide answers more frequently. If a model is trained to be hyper-aware of the gaps in its knowledge, it might simply respond that it needs more context to practically every question it receives.

The question of accuracy also touches on ethics, since hallucinations can spread misinformation rather quickly. According to Kaplan, this is simply an unavoidable consequence of building AI. The end goal may be a model that is completely free of hallucinations, but that will not be easy to achieve. Everyone makes mistakes, and an AI that is overly worried about making them becomes counterproductive, yet those same mistakes can be costly.

At the end of the day, ethical concerns will remain a core part of the AI debate for the foreseeable future. These questions are critical to figuring out whether AI can manage the balancing act of providing useful benefits to society while maintaining as much accuracy as feasible. It will be interesting to see where things go from here.

AI's tendency to make mistakes, known as "hallucinations," is deemed essential by Anthropic co-founder Jared Kaplan for keeping the technology useful.
Photo: Digital Information World - AIgen
