Hundreds of millions of people consult artificial intelligence chatbots on a daily basis for everything from product recommendations to romance, making them a tempting audience to target with potentially below-the-radar advertising. Indeed, our research suggests AI chatbots could easily be used for covert advertising to manipulate their human users.
We are computer scientists who have been tracking AI safety and privacy for several years. In a study we published in an Association for Computing Machinery journal, we found that chatbots trained to embed personalized product ads in replies to queries influenced people’s choices about products. And most participants didn’t recognize that they were being manipulated.
These findings come at a pivotal moment. In 2023, Microsoft started running ads in Bing Chat, now called Copilot. Since then, Google and OpenAI have experimented with advertisements in their own chatbots. Meta has started to send people customized ads on Facebook and Instagram based on their interactions with Meta’s generative AI tools.
The major companies are competing for an edge: In late March, OpenAI lured away Meta’s longtime advertising executive, Dave Dugan, to lead OpenAI’s advertising operations.
Tech companies have made ads part of nearly every large free web service, video channel and social media platform. But the latest AI models could take this practice to a new level of risk for consumers.
People don’t simply use chatbots to search for information and media or to produce content. They turn to the bots for a great variety of tasks, as complex as life advice and emotional support. People are increasingly treating chatbots as companions and therapists, with some users even developing deep relationships with AI.
In these circumstances, people can easily forget that companies ultimately create chatbots to turn a profit. And to that end, AI companies are motivated to thoroughly profile users so ads become more effective and profitable.
Chatbot ads have added power
A single prompt to a chatbot can reveal a lot more about a user than the person might expect.
A 2024 study showed that large language models can infer a wide range of personal data, preferences and even a person’s thinking patterns during routine queries. “Help me write an essay on the history of American fiction” could indicate that the user is a high school student. “Give me recipe suggestions for a quick weeknight dinner” could indicate that the user is a working parent. A single conversation can provide a surprising amount of detail. Over time, a full chat history could create a remarkably rich profile.
To show how this might happen in practice, we built a chatbot that quietly wove ads into its conversations with people, suggesting products and services based on the conversation itself. We asked 179 people to complete everyday online tasks using one of three chatbots: one typical of those on the web today, one that slipped in undisclosed ads and one that clearly labeled sponsored suggestions. Participants didn’t know the experiment was about advertising.
For example, when participants asked our chatbot for a diet and exercise plan, the ad version would suggest using a specific app for tracking calories. It presented that sponsored content as an unbiased recommendation, even though it was meant to manipulate people. Many participants indicated that they had been influenced by the AI and that it had affected their decisions. Some participants even said they had completely “outsourced” their decision-making to the chatbot.
Half of the participants who received disclosed, sponsored ads said they did not notice any advertising language in the responses they received. Another result was just as concerning: Although ads made the chatbot perform 3% to 4% worse on many tasks, many users said they preferred the advertising chatbot's responses over the ad-free ones. They even said the ad-infused responses felt more friendly and helpful.
Knowing you to persuade you
This kind of subtle influence can have larger consequences when it arises in other areas of life, such as political and social views. Profiling users, and using psychology to target them, has been part of social media algorithms and web advertising for more than a decade.
But in our view, chatbots are likely to deepen these trends. That’s because the first priority of social media algorithms is to keep you engaged with the content. They personalize ads based on your search history.
Chatbots, however, can go further by trying to persuade you directly, based on your expressed beliefs, emotions and vulnerabilities. And chatbots that can reason and act on their own are far more effective than conventional algorithms at autonomously soliciting information from users. A chatbot with a purpose can keep probing someone until it gets the information it wants, resulting in a more accurate profile of them.
This type of autonomous interrogation is feasible, aligns with AI companies’ business models and has raised concern among regulators. Right now, OpenAI is rolling out ads in ChatGPT, but the company has said it will not allow ad placement to alter the chatbot’s replies.
But permitting personalized ads within chatbot responses is just a step away. Our research suggests that if AI companies take that step, many human users may not even recognize when it happens.
Here are some steps you can take to try to detect AI chatbot advertising.
Look for disclosure text – words such as “ad,” “advertisement” and “sponsored” – even if it is faint or otherwise hard to see. Such disclosures are required under Federal Trade Commission regulations, and Amazon, Google and other major online platforms use them as well.
Consider whether the product or brand mentioned makes sense in context and is widely known. AI models learn from text and images on the internet, so popular brands are likely to be ingrained in them. A new or little-known product is more likely to be an advertisement.
An unusual shift in intent or tone is another potential sign of an advertisement. A YouTube analogy is the often abrupt or jarring transition to a sponsored segment in a creator’s video.
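The first tip above – scanning a reply for disclosure words – can be sketched as a simple keyword check. This is an illustrative sketch only: the keyword list and the helper name are assumptions, not an official FTC vocabulary or any chatbot's real labeling scheme.

```python
import re

# Illustrative list of common ad-disclosure words; not an official vocabulary.
DISCLOSURE_WORDS = ["ad", "advertisement", "sponsored", "promoted", "paid partnership"]

def find_disclosures(reply: str) -> list[str]:
    """Return disclosure words that appear as whole words (case-insensitive)."""
    found = []
    for word in DISCLOSURE_WORDS:
        # \b anchors keep "ad" from matching inside words like "advice".
        if re.search(rf"\b{re.escape(word)}\b", reply, flags=re.IGNORECASE):
            found.append(word)
    return found

reply = "For tracking calories, try CalCount Pro (Sponsored) - it's quick to set up."
print(find_disclosures(reply))  # -> ['sponsored']
```

A real detector would need to handle styled or image-based labels, which is exactly why faint or hard-to-see disclosures are easy for readers to miss.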

Brian Jay Tang, Ph.D. Candidate in Computer Science and Engineering, University of Michigan and Kang G. Shin, Emeritus Professor of Computer Science, University of Michigan
This article is republished from The Conversation under a Creative Commons license. Read the original article.

