AI Conversations Weaken Conspiracy Beliefs, Regardless of Messenger

A new research project shows that short conversations with a large language model can chip away at confidence in conspiracy theories and other unsupported beliefs. The work drew on data from 955 participants who each shared a belief, rated how certain they felt about it, and then held a brief two-round chat with GPT-4o. The model responded with evidence and reasoning targeted at the specific claim. What stands out is that the drop in confidence appeared even when people thought they were talking to a human expert rather than a machine.

The research team designed the experiment to test whether the source mattered or whether people simply reacted to the strength of the arguments. Participants were randomly told that they were talking either to an AI tool or to a human expert. Some received replies in a neutral tone, others in a more conversational, human-like style. Yet belief change followed the same pattern across all groups: the label did not shift outcomes, and tone did not move the needle either.

Participants who began the experiment with a conspiracy theory saw the sharpest drop. On average, their confidence fell by about 10 points in the main study, an 11.81 percent decrease relative to where they started. Those who held a nonconspiratorial but unsupported belief showed a smaller drop of roughly 5 points, a 5.96 percent decrease. Both shifts were statistically reliable. The follow-up article put the numbers in rounder terms, describing a decrease of about 12 percent for conspiracy beliefs and about 6 percent for other unsupported ideas.
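For readers who want to line up the point drops with the percentages, the back-of-the-envelope sketch below recovers the average starting confidence those pairs of figures imply. The 0-100 confidence scale and the resulting baselines are assumptions for illustration, not values reported by the researchers.

```python
# Rough arithmetic check relating the reported point drops to the reported
# percent decreases. The 0-100 confidence scale and the implied baselines
# are assumptions for illustration, not figures from the study itself.

def implied_baseline(point_drop: float, pct_decrease: float) -> float:
    """Average starting confidence implied by a point drop and its percent decrease."""
    return point_drop / (pct_decrease / 100)

print(f"Conspiracy beliefs:  {implied_baseline(10, 11.81):.1f} points to start")   # ~84.7
print(f"Unsupported beliefs: {implied_baseline(5, 5.96):.1f} points to start")     # ~83.9
```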

The researchers controlled for how strongly each belief was held at the start. Framing the speaker as an AI versus an expert made no difference, and neither did a neutral versus human-like tone. A Bayes factor analysis strongly favored the conclusion that speaker framing and tone played no meaningful role. The only consistent factor was the type of belief people held when they entered the study.
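The article does not spell out the analysis, but a Bayes factor of this kind is typically computed by comparing a no-effect model against an alternative. The sketch below simulates two framing groups and uses the pingouin library's JZS Bayes factor for a two-sample t-test; the data and group names are invented, and this is not the authors' actual pipeline.

```python
# Illustrative only: quantifying evidence FOR a null effect of speaker framing
# on belief change using a Bayes factor. Data are simulated; this is not the
# authors' analysis.
import numpy as np
from scipy import stats
import pingouin as pg

rng = np.random.default_rng(0)
ai_label = rng.normal(loc=-10, scale=15, size=200)      # hypothetical confidence change, "AI" framing
expert_label = rng.normal(loc=-10, scale=15, size=200)  # hypothetical confidence change, "expert" framing

t_stat, _ = stats.ttest_ind(ai_label, expert_label)
bf10 = float(pg.bayesfactor_ttest(t_stat, nx=len(ai_label), ny=len(expert_label)))
bf01 = 1 / bf10  # values well above 1 favor "framing makes no difference"

print(f"BF01 in favor of the null: {bf01:.2f}")
```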

Many participants in the expert condition thought they were talking with a person. In an open-ended item at the end, 45 percent described the speaker as human, while 21 percent suspected an AI. These perceptions still showed no link to belief change. Even when the researchers re-ran the statistical models controlling for whether a participant had detected the AI, the main pattern held.

The language people used revealed small shifts. Those who thought they were interacting with an AI wrote in a simpler and less varied vocabulary. This fits earlier work suggesting people sometimes simplify their language when they think a machine is listening. The overall structure of their arguments remained about the same across conditions. Syntactic complexity and use of reasoning markers showed no reliable differences.
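The write-up does not name the specific measures, but vocabulary variety is often summarized with something like a type-token ratio. The toy example below shows that kind of metric; it is only an illustration, not the study's text-analysis method.

```python
# Generic illustration of one way to quantify vocabulary variety in a reply:
# the type-token ratio (unique words / total words). This is not necessarily
# the measure used in the study; it only shows the kind of signal involved.
import re

def type_token_ratio(text: str) -> float:
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

reply_to_ai = "I still think the evidence is thin. The evidence just is not there."
reply_to_expert = "I remain unconvinced because the documentation you cite seems selective."

print(type_token_ratio(reply_to_ai))       # lower ratio -> less varied vocabulary
print(type_token_ratio(reply_to_expert))   # higher ratio -> more varied vocabulary
```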

The study team interpreted the results as evidence that high-quality, targeted counterarguments matter far more than the identity of the messenger. The model helped because it could pull relevant information quickly and organize it into a clear sequence. The researchers noted that a human could produce the same effect by assembling the same set of facts and delivering them with similar clarity; the difficulty is the amount of work required to gather and present that information on the spot.

The larger conclusion is that conspiracy beliefs and other unsupported ideas may not be as immovable as often portrayed. When people receive explanations that directly address the belief they described, with evidence tied to the exact claim, confidence can shift. The effect is not dramatic at the individual level, yet even small changes can matter when applied across many interactions.

The authors also pointed out the limits of the work. GPT-4o was trained on data dominated by English and Western sources, which shapes its tone and reasoning. That may limit how well these results apply across cultures. The team plans to explore how belief revision works in different contexts and with different types of beliefs, including those tied more closely to identity than to factual misunderstandings.

The project screened more than 8,000 initial participants and ended with 506 people in the conspiracy condition and 449 in the epistemically suspect condition, the group holding nonconspiratorial but unsupported beliefs. Attrition was higher in the expert framing, yet even under the conservative assumption that dropouts showed no belief change, the main results held.
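One way to picture that robustness check: impute zero change for every dropout, which can only pull the average effect toward nothing, and see whether the result survives. The sketch below demonstrates the idea with invented numbers; it is not the authors' code.

```python
# Illustration of the conservative attrition assumption: dropouts are imputed
# as zero belief change, which can only shrink the average effect.
# All numbers are invented for demonstration.
import numpy as np

completers_change = np.array([-12.0, -8.0, -15.0, -5.0, -10.0])  # hypothetical point changes
n_dropouts = 3

imputed = np.concatenate([completers_change, np.zeros(n_dropouts)])

print(f"Mean change, completers only: {completers_change.mean():.1f}")
print(f"Mean change, dropouts imputed as no change: {imputed.mean():.1f}")
```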

Across both the study and the follow-up coverage, the evidence points to a simple idea: people react to strong, specific, well-structured evidence, whether they believe it came from a machine or a person. The label does not explain the shift. The arguments do.

Notes: This post was edited/created using GenAI tools.
