Study Reveals OpenAI's Whisper AI Hallucinates Abusive Language and Fabricates Sentences

Whisper is a speech-to-text AI tool developed by OpenAI. A new study has revealed that the tool hallucinates abusive language and fabricates sentences and phrases. Most often, Whisper hallucinates when there are long pauses between the sentences a user is speaking. Researchers from Cornell conducted a study titled "Careless Whisper: Speech-to-Text Hallucination Harms" to document the flaws Whisper is showing. According to the study, when a speaker pauses between sentences or speaks somewhat forcefully, Whisper can insert hate speech into the transcript. Sometimes it even makes up parts of the conversation entirely.

Allison Koenecke, one of the study's authors, said that Whisper shows signs of hallucination, making things up out of nothing. She added that these hallucinations are alarming because if Whisper transcripts are used in court hearings or medical records, the errors could cause serious problems.

Whisper was released by OpenAI in 2022 and was trained on 680,000 hours of audio data. OpenAI has said that Whisper can transcribe audio with human-level accuracy. Koenecke noted that OpenAI has since improved the Whisper model and that users are now experiencing fewer hallucinations. The researchers also found that about 1% of Whisper's transcriptions contained entirely hallucinated sentences.

Whisper also sometimes added random names, irrelevant information, fake addresses, and fake websites. There were instances where Whisper transcribed a sentence accurately but appended additional sentences containing words like gun, kill, and terror. The research was conducted by running 1,300 speech clips through Whisper.

Image: DIW-Aigen
