Google has unveiled an expanded rollout of its AI-powered Search Live feature, adding new ways for users to engage with the world through their smartphone cameras. Announced during its I/O developer event, the tool blends visual understanding and real-time conversation, allowing users to point their camera at any object and receive instant responses from Google Search.
Initially developed under Project Astra, the feature first appeared in Gemini Live on Android, enabling live camera sharing with Google’s AI. It allowed the system to interpret real-world scenes—like identifying ingredients or suggesting related content—by continuously analyzing the camera feed. Now, Google is bringing that same functionality directly into Search’s evolving AI Mode, making it accessible to a broader audience.
Users will be able to activate the experience by tapping a new Live icon, available in both Google Lens and AI Mode. Once the camera is active, they can ask context-aware questions, and Search Live will reply with tailored answers, helpful links, and relevant media—adapting to whatever the user points at.
The update doesn’t stop there. Google is also integrating Search Live into the Gemini app for iOS, marking the feature’s first arrival on Apple devices. Previously limited to Android, it was tested on the Pixel 9 and Galaxy S25 before reaching wider availability. Although the feature was initially positioned as a premium-only perk under Gemini Advanced, Google has now made it freely accessible on both Android and iOS.
Rollout is expected to begin later this summer, with early access available through Google Labs. The tool joins a wider push toward a more immersive AI-driven experience in Search, with upcoming features like Deep Search for research tasks and web automation agents also on the roadmap.
By placing vision and voice at the center of search, Google is reshaping how people explore their surroundings—merging observation and inquiry into a seamless interaction layer powered by real-time AI understanding.