Security Flaws in Gemini Let Hackers Steal User Data, Google Rushes to Patch

Researchers at Tenable uncovered a set of three problems inside Google’s Gemini system that together created a path for attackers to extract private information. The weak points were found in different parts of the assistant, but when combined, they allowed an intruder to place a hidden instruction, make Gemini follow it, and then send user data outside without drawing attention.

How the chain began


The first step was finding ways to place malicious text into Gemini’s context. One method was to slip it through system logs that the Cloud Assist feature later summarized. Another was to insert crafted search queries into a person’s Chrome history, which Gemini then treated as part of its personalization model. Once an attacker’s text sat in the context, Gemini could be guided to run actions with its Browsing Tool. That tool, if misused, sent information out to servers under the attacker’s control.

Logs used as a hidden channel

Cloud Assist is designed to help engineers by summarizing logs and pointing to problems. Tenable’s team noticed that in some cases the tool pulled in raw text from user traffic. By adding a prompt inside fields such as HTTP headers, they were able to make Gemini read the injected text when a log entry was expanded. Because the text was tucked away until a user asked for more detail, the instructions remained invisible until triggered. This meant that even routine traffic could become a delivery path for malicious prompts.

Search history as input

Personalization in Gemini relies on search history to tailor results. That design gave attackers a second entry point. By running code on a web page, they could add search items into a visitor’s history without the person noticing. Gemini later processed those entries as if they were normal searches. With carefully split instructions hidden inside those queries, the researchers managed to push commands into the assistant’s context. This channel opened the way to leak stored user details such as location or other profile data.

Silent exfiltration through browsing

Attempts to steal data often depend on producing visible output like links or images. Google had already tightened Gemini against that type of trick. The researchers instead focused on tool execution. They built prompts that asked Gemini to fetch a web address where sensitive data was placed in the request itself. When Gemini carried out the request, the attacker’s server received the data quietly. Because the transfer happened in a background request rather than on screen, the usual safeguards did not stop it.

Why it matters

The findings show that every input channel to an AI model can be abused. Logs, history, and browsing activity are all treated as context, yet they behave like inputs that the model will act on. The three flaws together highlighted that focusing only on visible outputs is not enough. Attackers can use built-in tools to move information out without leaving obvious traces.

Google’s fixes

Google made changes after being told of the problems. The company blocked hyperlink rendering in log summaries, adjusted the personalization models, and placed tighter limits on how browsing requests can be used. It also strengthened defenses against prompt injection more broadly. These fixes cut off the paths Tenable used, but they also underline how layered controls are needed.

Lessons for the future

The research points to two main controls. First, every input should be treated as untrusted. Logs, telemetry, and history must be cleaned before reaching a model that can run actions. Second, any tool a model can use should run with least privilege. Requests should not be able to include user secrets, and access to external servers should be narrowly restricted. Monitoring outbound calls and putting in safeguards that stop sensitive placeholders from becoming live requests are also critical.

The wider view

The attack did not rely on rare or unknown exploits. It used known techniques and a close look at how Gemini handled combined inputs. The fixes remove the specific flaws, but the larger point is about design. As assistants grow more connected to data and external tools, their features must be built with security at the center, not bolted on later.

Notes: This post was edited/created using GenAI tools.

Read next: Israel Pours Millions Into AI and Influencer Campaigns to Shape Online Narratives

Previous Post Next Post