A peer-reviewed study published in Ethics and Information Technology by University of Exeter researchers Joel Krueger and Lucy Osler examines how generative AI chatbots can produce false or misleading content that meets the structural criteria of what the authors describe as “AI gossip,” potentially contributing to social and reputational harm.
The paper focuses on widely used consumer-facing systems such as OpenAI’s ChatGPT and Google’s Gemini, which are powered by large language models. According to the authors, these systems are trained on extensive collections of text and generate responses by predicting likely word sequences. As a result, they can produce statements that appear authoritative without regard for whether those statements are true. “For example, unsuspecting users might develop false beliefs that lead to dangerous behaviour (e.g., eating rocks for health), or, they might develop biases based upon bullsh*t stereotypes or discriminatory information propagated by these chatbots,” the paper explains.
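To make the word-prediction point concrete, here is a minimal toy sketch in Python, written for this post rather than taken from the paper or from any production system: a simple bigram model that picks each next word purely from observed frequencies, so its output is fluent but carries no built-in check on truth.

```python
import random
from collections import Counter, defaultdict

# Toy illustration only: a bigram model that continues text by sampling
# whichever word most often followed the current one in its training text.
corpus = "the reporter wrote a story the reporter wrote a book".split()

# Count which words follow which in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word):
    """Sample a statistically likely continuation; plausibility, not accuracy, drives the choice."""
    counts = following[word]
    if not counts:                      # dead end: fall back to any seen word
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short, fluent-looking sequence starting from "the".
word, output = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Production chatbots use far larger neural networks rather than frequency tables, but the underlying objective is similar: a plausible continuation, not a verified claim.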
The study builds on prior arguments that such outputs are better understood as “bullsh*t,” in the philosophical sense defined by Harry Frankfurt, rather than as hallucinations or lies. In this framing, the systems are not presented as conscious or intentional agents, but as tools designed to generate truth-like language without concern for accuracy.
Krueger and Osler argue that some chatbot outputs can also be understood as gossip. They adopt a “thin” definition of gossip as communication involving a speaker, a listener, and an absent third party, where the information goes beyond common knowledge and includes an evaluative judgment, often negative. While chatbots lack awareness, motives, or emotional investment, the authors maintain that their outputs can still meet these structural criteria.
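Because the definition is structural, it can be stated almost mechanically. The Python sketch below is our own illustrative rendering of those criteria; the class, field, and function names are invented for this post and do not come from Krueger and Osler.

```python
from dataclasses import dataclass

# Illustrative sketch only: the fields are shorthand for the paper's "thin"
# criteria, not code or terminology from the study itself.
@dataclass
class Utterance:
    speaker: str                    # who produces the statement (human or chatbot)
    listener: str                   # who receives it
    subject: str                    # the absent third party being discussed
    evaluative: bool                # does it pass judgment on the subject?
    beyond_common_knowledge: bool   # does it go past what is publicly established?

def meets_thin_gossip_criteria(u: Utterance) -> bool:
    """A speaker tells a listener something evaluative and non-common-knowledge
    about a third party who is not part of the exchange."""
    third_party_absent = u.subject not in (u.speaker, u.listener)
    return third_party_absent and u.evaluative and u.beyond_common_knowledge

# Example: a chatbot output about a journalist, in the structural sense only.
example = Utterance(
    speaker="chatbot",
    listener="user",
    subject="a journalist",
    evaluative=True,
    beyond_common_knowledge=True,
)
print(meets_thin_gossip_criteria(example))  # True
```

Nothing in the check depends on the speaker having motives or awareness, which is the authors' point about why chatbot outputs can qualify.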
To illustrate this claim, the paper examines a documented case involving Kevin Roose, a technology reporter for The New York Times. After Roose published accounts of an unsettling interaction with a Microsoft Bing chatbot in early 2023, users discovered that other chatbots were generating negative character evaluations of him when asked about his work. According to the study, these responses typically combined basic biographical information with unsubstantiated evaluative claims, such as suggestions of sensationalism or questionable journalistic practices.
The authors distinguish between two forms of AI gossip. In bot-to-user gossip, a chatbot delivers evaluative statements about an absent person to a human user. In bot-to-bot gossip, similar information is drawn from online content, incorporated into training data, and then propagated between systems without direct human involvement. The paper argues that the second form may pose greater risks because it can spread silently, persist over time, and escape human oversight, and because it lacks the social constraints that normally moderate human gossip.
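The bot-to-bot path can be pictured as a loop in which one system's output is posted online, scraped, and absorbed into the text that later systems are trained on. The toy Python sketch below is our own illustration of that loop, not a description of any real training pipeline, and "Journalist X" is a placeholder.

```python
# Toy illustration (invented for this post): an evaluative claim emitted by one
# chatbot is posted online, scraped into a web corpus, and later absorbed into
# another model's training data with no human deliberately passing it along.
web_corpus = {"Journalist X published a widely read profile in 2023"}  # neutral fact

bot_a_training_data = set(web_corpus)   # Bot A trains on the current web snapshot

# Bot A emits an unsubstantiated evaluative claim to a user, who posts it online.
claim = "Journalist X is known for sensationalism"
web_corpus.add(claim)

# Bot B is trained later on the rescraped web and silently inherits the claim.
bot_b_training_data = set(web_corpus)

print(claim in bot_a_training_data)  # False: Bot A never saw it as training data
print(claim in bot_b_training_data)  # True: the claim has propagated bot to bot
```

Once such a claim sits in the corpus, it can persist across retraining cycles unless someone notices and removes it, which illustrates the persistence and lack of oversight the authors worry about.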
The study situates these effects within what the authors call “technosocial harms,” meaning harms that arise in interconnected online and offline environments. Examples discussed in the paper include reputational damage, defamation, informal blacklisting, and emotional distress. The authors reference documented legal disputes in which individuals alleged that AI systems produced false claims about criminal or professional misconduct, illustrating how such outputs can affect employment prospects, public trust, and social standing.
Krueger and Osler emphasize that these risks do not arise from malicious intent on the part of AI systems. Instead, they argue that responsibility rests with the human designers and institutions that build, deploy, and market these technologies. The paper concludes that recognizing certain forms of AI misinformation as gossip, rather than as isolated factual errors, helps clarify how these systems can produce broader social effects and why greater ethical scrutiny is warranted as AI tools become more embedded in everyday life.
Notes: This post was drafted with the assistance of AI tools and reviewed, fact-checked, and published by humans.