Google’s Gemini Chatbot Behaves Unexpectedly As Users Complain Of Bizarre Gibberish Replies

Generative AI can be helpful most of the time, but it would be fair to say the technology has its 'off days'.

The technology can behave in unexpected ways, and Google's AI chatbot Gemini seems to be going through one of those phases right now.

Users have been complaining that the chatbot has gone berserk, spitting out repetitive and bizarre responses filled with strange characters and gibberish text that makes no sense.

While the tool is genuinely useful to many, whether for quick answers or complex tasks, it remains prone to serious problems like hallucinations, and neither experts nor users have let those go unnoticed.

The latest bug, however, is not only bizarre but reportedly unlike the usual errors users have come to expect.

Users report that the chatbot frequently produces gibberish replies and repeats whatever it is saying, generating text and characters that make no sense at all.

Some of the responses are reportedly much longer than average, while others appear to have nothing to do with the prompts provided. Certain characters suggest these may be raw tokens left over with broken formatting, rendering the replies unreadable.

For now, the cause is unclear, but the fact that reports keep surfacing across platforms like Reddit and X/Twitter suggests the problem is widespread. It also clearly lies on Gemini's end, and Google has yet to comment publicly on what went wrong.

Users say the issue affects both the web version of Gemini and its mobile app.

The one silver lining is that the issue seems to strike only occasionally and at random, with the chatbot returning to its usual helpful responses afterward.

Image: @IBANinja/X

Read next: Concerns Over AI Dangers Mount As Former And Current OpenAI Employees Issue Warning In New Letter