The study, conducted by Yext and based on a survey of 2,237 adults across four countries (the United States, United Kingdom, France, and Germany), shows how people are blending AI search into their online routines. Three out of every four respondents (75%, to be exact) said they are using AI search more than they were a year ago. Meanwhile, 43% said they now use AI tools daily or more often. This signals a clear behavioral shift, especially as traditional search methods begin to lose ground.
But the shift hasn’t translated into full trust. Many users say the experience falls short in specific, practical ways. When asked what frustrates them most about AI-powered search, 40% pointed to poor handling of complex or multi-part queries. These are not abstract complaints. For instance, a single travel plan involving multiple stops, price filters, or scheduling conditions often leads to inconsistent or shallow AI results, forcing users to double-check information elsewhere or rephrase their questions to get anything useful back.
Another 37% of users cited a lack of clear, trustworthy answers with proper sources. When an AI model produces content, it often does so without visibly citing where that information came from, making it hard for people to verify what they’re reading. That absence of traceability affects not only personal confidence in the result but also the user’s willingness to act on it.
Beyond credibility and logic, usability came into question as well. Roughly one-third of respondents (34%) said AI tools do not provide actionable next steps, particularly when dealing with service-related queries such as “how do I switch mobile providers” or “what to do after applying for a loan.” Without clear direction or links to take further action, users are left with generic advice that lacks follow-through.
The difficulty in comparing local options was a common frustration for 31% of respondents. For local discovery, such as finding the best plumber nearby or comparing prices between local clinics, AI tools tend to return broad answers, often missing location-specific context. In these cases, users still rely more heavily on traditional search platforms or directory-style services to get detailed comparisons.
Personalization also remains a weak point. Thirty percent of users said the results don’t reflect their preferences or search history, which makes AI outputs feel disconnected or too generalized. The tools often provide a “one-size-fits-all” answer, even in cases where a returning user expects some continuity in recommendations.
Smaller but still significant issues were also flagged. One in five users (20%) noted that AI tools fail to summarize long-form content accurately, especially when the content requires interpretation or nuance, such as policy briefings, academic papers, or medical information sheets.
Across all these shortcomings, only 3% of respondents chose “Other,” suggesting that the main issues identified (complexity, trustworthiness, comparability, actionability, personalization, and summarization) capture the vast majority of user concerns today.
This disconnect between rising usage and persistent doubts has a direct impact on how brands show up in AI-driven environments. On one side, people are turning to AI with increasing frequency. On the other, they’re second-guessing the very results they receive. That tension offers both a warning and an opportunity.
The warning is straightforward: if the data used by AI tools to represent a brand is incomplete, inconsistent, or not updated in structured form, the brand risks being misrepresented, or worse, excluded entirely. A system that relies on pattern recognition and aggregated knowledge can easily skip over businesses that haven’t prepared their information in a machine-readable way. If an address is missing, a product spec is wrong, or a business category is unclear, AI systems may simply route users elsewhere.
The opportunity, however, lies in precision. Trust can be built by closing the accuracy gaps. That starts with verifying that every piece of information, from store hours to product attributes to customer reviews, is both correct and formatted in a way that AI models can interpret cleanly. Structured data doesn’t just improve visibility; it directly improves the quality of the answers AI systems generate, which in turn shapes user trust.
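As a concrete illustration, here is a minimal sketch of what machine-readable brand data can look like, using the schema.org vocabulary serialized as JSON-LD, one widely used structured-data format that search engines and AI crawlers can parse. The business name, address, and other details below are hypothetical placeholders, not examples from the study:

```python
import json

# Minimal sketch: brand data expressed with the schema.org vocabulary
# as JSON-LD. All business details here are hypothetical placeholders.
business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Plumbing Co.",        # hypothetical name
    "url": "https://www.example.com",      # hypothetical URL
    "telephone": "+1-555-555-0100",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main Street",
        "addressLocality": "Springfield",
        "addressRegion": "IL",
        "postalCode": "62701",
        "addressCountry": "US",
    },
    # Explicit hours close exactly the kind of gap (missing or unclear
    # business details) that can cause AI systems to route users elsewhere.
    "openingHoursSpecification": [{
        "@type": "OpeningHoursSpecification",
        "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
        "opens": "08:00",
        "closes": "18:00",
    }],
}

# Typically embedded in a page inside a <script type="application/ld+json"> tag.
print(json.dumps(business, indent=2))
```

Publishing a block like this alongside a listing is one way to make address, category, and hours unambiguous to any system that ingests the page, rather than leaving those details to be inferred from free text.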
In environments where AI tools generate summaries, compare listings, or offer direct responses instead of links, brands must take control of the raw data that fuels those outcomes. The more accurate the information is at the source, the less likely the system is to produce misleading summaries or omit a brand entirely.
As people use AI more, they’re expecting more. That means brands can no longer treat AI visibility as a bonus; it is fast becoming a baseline requirement. But usage alone doesn’t equal loyalty. Accuracy, context, and trust are still the currency that determines whether people follow through after asking a question.
The takeaway is clear: while AI-powered search has become routine for many, satisfaction is still conditional. The next phase of competition won’t revolve solely around presence in AI tools, but around how trustworthy, complete, and actionable that presence feels to the person using it.