Key Points
Google’s Gemini AI can now access Photos, Gmail, Search history and other apps through a feature called Personal Intelligence.
The update allows AI to generate highly personalised responses and images using users’ own data and memories.
While Google says the feature is opt-in and privacy-safe, critics warn of risks around data use and “data bleed”.
Google rolled out a major update to its Gemini AI assistant in April 2026, allowing it to access user data across its ecosystem. The chatbot can now use personal data from Google Photos, Gmail, Search history and YouTube activity as part of a new feature called “Personal Intelligence”.
Once enabled, Gemini can draw from user notes and labels, emails, photos, browsing patterns and stored preferences to generate personalised responses and images that reflect a user’s life more closely.
At the centre of the rollout is Gemini’s integration with Google Photos. The AI can now scan a user’s photo library, including labelled faces, relationships and past moments, to create customised images. Instead of manually uploading reference images or writing detailed prompts, users can rely on the system to “fill in the blanks” using existing data.
Google says this makes AI interactions more intuitive. A user could request an image such as a family vacation or a personal scenario, and Gemini would generate it using stored visual references. A “Sources” option allows users to see which images were used to guide the output, and prompts can be refined if results are inaccurate.
The feature is powered by the Nano Banana 2 image generator and is designed to reduce the need for complex prompts. By connecting multiple apps, Gemini can also offer contextual assistance beyond images – referencing past emails to suggest appointments, using search history to recommend content, or drawing from photos to infer preferences.
“Personal Intelligence gives Gemini an inherent understanding of your preferences from the start,” Google said, adding that it allows the system to work with real-life context rather than abstract instructions.
The rollout is currently limited to paid Gemini subscribers in the United States, including Google AI Plus, Pro and Ultra users, with plans to expand to more regions and integrate further into Chrome and Search.
The update, however, has triggered significant privacy concerns since it allows AI systems to be trained on sensitive personal data.
One key concern is the blending of information across different contexts, which could lead to “data bleed” – where data from private emails, personal photos or browsing history could surface unexpectedly in unrelated interactions.
There are also concerns about how personal content is reused. Since Gemini can draw from images of family members, friends or pets stored in Google Photos, questions have been raised about consent, especially when those individuals have not directly interacted with the AI system.
Google has attempted to address these concerns by emphasising that the feature is not enabled by default. Users must opt in and can choose which apps to connect. At the same time, Google acknowledges that the AI may misinterpret context or make incorrect assumptions based on available data. For instance, repeated patterns in photos or emails could lead the system to draw inaccurate conclusions about a user’s interests.
As AI systems become more embedded in everyday digital life, the question for users is no longer just what these tools can do, but how much personal data they are willing to share to make them work.
[DS]