When people go online looking for information, they often end up circling back to what they already believe. A new research project has shown that this tendency isn’t just about what’s on the internet; it’s also about the words people type into the search bar. Across many topics, from health and energy to crime and finance, people tend to search in ways that point toward answers they already find comfortable or familiar.
This pattern, known as the narrow search effect, surfaced across a wide set of experiments. Whether people were using Google, AI-driven search engines, or even chatbots, their searches usually lined up with their existing beliefs. And that’s where the trouble begins. When search engines try to be helpful by showing the most relevant answers, they often feed back results that fit those narrow search terms too closely. So people end up finding information that simply tells them what they expected to hear.
In one part of the study, participants searched for information about caffeine’s health effects. Some looked for benefits, others focused on risks. Unsurprisingly, their beliefs shifted based on what they saw. Even when people searched out of genuine curiosity, without trying to prove themselves right, their search terms quietly shaped the answers they found.
The researchers didn’t stop there. They tried to see if simple changes could nudge people toward a more balanced understanding. They asked participants to conduct additional searches or to think about what might have happened if they had searched differently. But these soft interventions didn’t do much, as people mostly stuck to their starting positions.
However, the study found a clearer path forward when the search results themselves were adjusted. When people were presented with results that mixed both sides, covering pros and cons, risks and benefits, they were more likely to update their views. This worked whether the results came from a traditional search engine or from specially designed AI chatbots that could offer broader answers. Surprisingly, people didn’t find these broader results less useful or less relevant. They engaged with them just as readily.
What’s striking is that most participants didn’t think they were trying to confirm their beliefs. They weren’t deliberately searching to prove themselves right, yet their search terms still leaned in that direction. It seems this habit happens quietly, without most people even realizing it.
The study also looked at AI chatbots like ChatGPT. When the chatbots provided narrow answers, focused tightly on the user’s search terms, people’s beliefs stayed close to where they started. But when the chatbots offered answers that included wider viewpoints, covering the topic from different angles, people were more open to adjusting their opinions.
Interestingly, some AI search tools, like Microsoft’s updated Bing, sometimes already try to broaden narrow search queries automatically. But this isn’t applied consistently. The researchers suggested that simple features, like a "Search Broadly" button, could help people reach a wider set of information and step outside their usual belief bubbles.
Still, broadening search isn’t always the right answer. When someone’s looking for something straightforward, like business hours or the location of a building, giving extra information could just make things more confusing. There’s also the risk that when misinformation is common, broader searches might accidentally pull in false or misleading information if the system isn’t carefully designed.
Despite these limits, the research points to a clear lesson: search engines and AI chatbots can be improved to help people see the bigger picture. By offering a mix of narrow and broad results, search tools can guide people toward more balanced thinking. This might help reduce belief polarization, which has become a growing challenge in many societies.
The study suggests that small changes in how information is delivered could make a real difference. When people are gently steered toward a wider view, they seem willing to explore it. And perhaps, with search engines that widen the lens instead of tightening it, more people could begin to step out of their own information bubbles.
Notes: Image: DIW-Aigen. This post was edited/created using GenAI tools.