Gemini AI can now understand what’s on your Android screen

Google has started rolling out a new feature that lets its Gemini AI assistant understand what you’re looking at on your phone screen. This update is now available for Android users in the US and works through the Gemini app or the Assistant toggle.

Once this feature is active, you’ll see a small Gemini overlay at the corner of your screen. You can ask it questions about whatever is on display — like a recipe, a news article, or a shopping page — and it will respond based on what it sees. For example, if you’re viewing a news article, you can ask Gemini to summarize it. If you’re shopping, it might help compare prices or provide reviews.

This “Circle to Search”-style interaction is built to feel more natural: you can speak, type, or even draw a circle around something, and Gemini will know what you’re referring to. However, not all apps are supported yet, especially those with extra privacy protections like banking apps.

The new feature is part of Google’s push to make Gemini a true replacement for Google Assistant. Over time, it may gain more smart actions as Gemini becomes more deeply integrated with Android and Google services.

While this rollout is currently limited to English-speaking users in the US, more regions and languages are expected to be added in future updates. If you don’t see the feature yet, make sure you’ve updated your Gemini app and enabled the on-screen context option in the settings.
