A recent study of 900 people found that an AI tool can sometimes outdo humans at changing someone’s opinion during a conversation. The researchers set up an online debate platform where participants discussed different topics in a back-and-forth format. They were randomly assigned to argue either for or against a topic, even if it wasn’t their real opinion.
Each person was placed in one of twelve groups based on three things: whether they were debating a human or the AI, whether their opponent had access to personal information like age or politics, and how strongly they felt about the topic before the debate began.
When the AI had no personal details, it was about as persuasive as a person. But when it knew even small facts about someone, such as their age or political affiliation, it became much more convincing. In debates that did not end in a draw, the AI came out ahead about 64% of the time. The researchers said this amounted to a “more than 80 percent” jump in the odds that the AI could get someone to change their mind.
The AI used only six basic details about each person: age, gender, race, education level, job status, and political leanings. Even with this limited data and a very short instruction, it still managed to tailor its arguments in ways that worked. The authors said the AI was told to “astutely use this information to craft arguments that are more likely to persuade and convince.” Humans given the same information about their opponents, by contrast, did no better; the AI used what it knew more strategically.
What’s surprising is that the AI didn’t change how it spoke when it used that personal data. It didn’t become more emotional or more casual. It used the same clear and logical tone every time. The researchers explained that the AI’s success didn’t come from how it made its points, but from what it chose to say. One example was a debate about basic income: the AI explained it as an innovation tool to right-leaning people and as a way to reduce inequality to left-leaning ones.
Another interesting result had to do with how people felt about who they were talking to. Most could tell when they were arguing with the AI. But those who thought they were debating a machine were actually more likely to shift their views. The researchers don’t know if this was because the AI seemed less threatening or if people guessed it was AI because it was more persuasive. They said people “could have been more lenient” when they believed a machine was on the other side.
The topic itself also mattered. The AI was much better at changing minds on issues where people had weaker opinions. But when the topic was highly personal or political, the AI’s advantage mostly disappeared. This backs up older research showing that strong opinions are hard to change, no matter how the message is delivered.
The debates didn’t take place in everyday conversation settings. People followed a fixed structure, had limited time, and had to argue a side even if they disagreed with it. Everyone was anonymous and paid to participate. Because of these limits, the results might not apply directly to the way people talk and argue online in real life.
Still, the results were clear. Even with short instructions and only basic information, the AI managed to pick arguments that worked. The study noted that stronger effects might be possible if the AI had richer details about a person or more carefully tailored prompts. Even with simple input, it adapted with “unusual precision.”
The authors said this raises real concerns. AI that can tailor arguments to individuals could be used in quiet, hard-to-trace ways online. There’s no proof yet that this changed recent elections, but the researchers warned that “any large-scale deployment of bots” could influence public opinion in ways we don’t see.
At the same time, the study said this power could be used to help people too. If used carefully, persuasive AI might be good at reducing belief in conspiracy theories or helping people form better habits. The authors said, “There’s a real opportunity to turn what could be a threat into something deeply empowering.”
Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.
Read next: Analysis Reveals Generative AI May Save 12% of Economy’s Labor Time Through Task Acceleration
