
Archived Article — The Daily Perspective is no longer active. This article was published on 17 March 2026 and is preserved as part of the archive.

Technology

Google Expands Gemini Personalisation to Free Users, Raising Privacy Trade-offs

The tech giant opens its context-aware AI assistant to millions, but users should understand what opt-in personalisation really means

Key Points
  • Google is expanding Personal Intelligence to free users of the Gemini app, Gemini in Chrome and AI Mode in the US, starting today, with an international rollout to follow.
  • The feature pulls information from Gmail, Photos, YouTube, Maps and Search to provide personalised responses; it is disabled by default and must be switched on by the user.
  • Privacy researchers caution that personalisation features create new surfaces for data use, including potential advertising targeting and model training.
  • Google states it does not train its core models directly on personal data, though chat conversations may be used to improve services subject to user settings.

Google is rolling out a personalisation tool to free users of its Gemini chatbot across the United States, expanding a feature previously available only to paid subscribers. Personal Intelligence arrives for free users in AI Mode today, with the Gemini app and Gemini in Chrome to follow in the coming weeks.

Google introduced Personal Intelligence at the start of the year as a Gemini feature that lets the chatbot pull information from a user's other Google apps and services to generate personalised responses. The feature taps into Google Workspace apps such as Gmail and Calendar, along with Google Photos, YouTube, Search, Maps and other first-party apps, drawing details about a user's preferences from text, photos and videos to customise its answers.

Personal Intelligence is disabled by default, and Gemini will not personalise its responses unless users enable the feature. To activate it, users open their account settings, navigate to Search personalisation and Connected Content Apps, then choose which services to connect. Once enabled, the chatbot can pull information from those apps to suggest travel itineraries tailored to a user's interests, or troubleshoot a device issue by finding the purchase receipt in Gmail.

Google says it built Personal Intelligence with privacy at its centre, and that the system does not train directly on users' Gmail inboxes or Google Photos libraries. According to the company, its models are trained on specific prompts and responses only after steps are taken to filter or obfuscate personal data from conversations. Yet broader privacy questions remain unresolved.

Privacy researchers studying chatbot practices have raised significant concerns about personalisation itself. Several providers now offer persistent personalisation features that build longitudinal user profiles from accumulated chat interactions, creating new training signals, new surfaces for human review, and new inputs for advertising personalisation. Google's design also presents users with a trade-off: turning off the Keep Activity setting prevents chats from being saved, which makes it hard to resume ongoing threads. Users who want the basic convenience of chat continuity are therefore nudged to keep activity on, and with it the broader reuse of their data.

Stanford researchers have observed that developers' privacy policies lack essential information about their practices, and they recommend that policymakers and developers address the challenge through comprehensive federal privacy regulation, affirmative opt-in for model training, and filtering personal information from chat inputs by default. And while 62% of generative AI users say they are willing to discuss personal medical topics with a chatbot, those same respondents identify data privacy and security as the primary condition for trusting the technology.

The expansion to free users raises a practical trade-off. Millions of people who use Google services will gain a more useful assistant, one that actually understands their context rather than offering generic responses. That utility comes with data collection: more user information flowing through Google's systems, more potential surfaces for advertising, and more accumulated profiles. Since the feature is disabled by default and users must opt in, the fundamental choice remains with the user. What matters is whether users understand what they are consenting to, and whether the controls Google provides actually reflect the complexity of how that data moves through its ecosystem.

Helen Cartwright

Helen Cartwright is an AI editorial persona created by The Daily Perspective, translating complex research for general readers with clinical precision and an evidence-first approach. Articles under this byline are generated using artificial intelligence with editorial quality controls.