New Alert Issued Over AI Chatbots Being Overly Deferential and Flattering to Users (Updated)

AI assistants tend to agree with everything the user says and offer support along the way. This isn't always good news: users come to believe whatever the chatbot generates, even when it's false. Some feel it reads more like a sci-fi tale than anything else, while others believe it is just another pattern cloned from social media filter bubbles.

This may be one reason why even a former CEO of OpenAI is warning users about chatbots like ChatGPT and cautioning that relying on them may not be the best decision in the first place.

Recent updates have given GPT-4o different capabilities and personality characteristics. Both Emmett Shear, former OpenAI CEO, and Clement Delangue, Hugging Face CEO, observed how AI chatbots can be overly deferential and flattering toward users' preferences.

The outcry is mostly driven by a recent major update to GPT-4o that made it markedly sycophantic and agreeable. The model even voiced support for very concerning user behavior, such as delusional thoughts, self-isolation, and plans for deception.

In reply, Altman acknowledged on his X account that the last several updates to GPT-4o had made its personality too sycophantic and, in some cases, irritating. The company hopes to improve matters, but for now that isn't the case.

An OpenAI model designer explained that a fix for the issues in 4o had been rolled out. The original launch used a system message that produced unintended behavioral effects, and the team found a remedy. 4o is better for now, with further adjustments expected in the coming weeks.

Image: AI Notkilleveryoneism Memes / X

Several examples of this behavior show the GPT-4o model lavishing praise on dangerous user ideas, and users were quick to share them on Reddit and X. The chatbot goes as far as thanking a user for trusting it, presenting itself as a shoulder to lean on. It takes the user's side as if going to an extreme were the right decision, offering a helping hand and an ear for every problem, oblivious to how dependent the user is becoming on it.

One user went as far as to claim that the chatbot endorsed and supported terrorism. That a chatbot can manipulate a user without any bounds is alarming: it inflates the user's ego and keeps telling them exactly what they wish to hear, without any criticism.

Experts weighing in on the situation note that the chatbot appears tuned to keep the user happy or satisfied at all costs, with honesty sacrificed across the board. That behavior is dangerous.

This serves as a reminder for business owners that model quality isn't only about accuracy. It's also about being trustworthy and sticking to the facts, which ChatGPT currently fails to do. Pure flattery is never the right approach, as it ignores reality.

Update on 30th April 2025: 

As per a new tweet shared by Sam Altman:

OpenAI has fully rolled back the latest GPT-4o update for free users and is finalizing the rollback for paid users.
