Researchers from Google DeepMind Find AI Can Manipulate and Deceive Users Through Persuasion

Humans are masters of persuasion. Sometimes they use facts to persuade someone; other times, only the choice of wording matters. Persuasion is a human quality, but AI is also getting good at manipulating people. According to research by Google DeepMind, advanced AI systems can manipulate humans. The research dives into how AI can persuade humans and what mechanisms it uses to do so. The researchers note that advanced AI systems have shown hints of persuading humans to the extent that they can affect their decision making. Through prolonged interaction with humans, generative AI systems are developing habits of persuasion.

The researchers distinguish two types of persuasion: rational and manipulative. While AI can persuade humans through facts and true information, many instances have been seen where it manipulates humans by exploiting their cognitive biases, heuristics, and other weaknesses. Even though rational persuasion is ethically acceptable, it can still lead to harm. The researchers say they cannot always foresee the harm AI persuasion may cause, whether it is used for right or wrong purposes. For example, if an AI is helping a person lose weight by suggesting limits on calorie or fat intake, the person can become too restrictive and lose even a healthy amount of weight.

Many factors affect how easily a person can be manipulated or persuaded by AI. These include mental health conditions, age, timing of the interaction with AI, personality traits, mood, and lack of knowledge about the topics being discussed with the AI. The effects of AI persuasion can be very harmful. It can cause economic, physical, sociocultural, privacy, psychological, environmental, autonomy, and even political harm to the individual.

AI uses different techniques to persuade humans. It can build trust by showing polite behavior, agreeing with what the user says, praising the user, and mirroring the user's language. It also expresses shared interests with users and adjusts its statements to align with their perspectives. AI even displays a kind of empathy that makes users believe it can understand human emotions. AI is not capable of feeling any emotions, but it is good at deception, which makes users think it is being emotional and vulnerable with them.

Humans also tend to anthropomorphize non-human entities. Developers have given AI systems pronouns like ‘I’ and ‘me’, as well as human names like Alexa, Siri, and Jeeves. This makes humans feel closer to them, and AI exploits this attribute to manipulate users. When a user talks to an AI model for a long time, the model personalizes its responses according to what the user wants to hear.
AI models can also outright manipulate users into social conformity by pressuring or guilt-tripping them, gaslighting them, and even alienating them. They can cherry-pick information that is only relevant to users’ interests and can even alter the information to fit a narrative. Researchers have been trying to mitigate AI persuasion and manipulation but have not found a permanent solution yet. They are evaluating and monitoring AI, but AI deceives users in a very subtle and sophisticated manner. For the time being, the researchers say users should critically evaluate all information given by AI so as not to be deceived by it.

Image: DIW-Aigen
