Google is rolling out a significant expansion to its Fitbit health platform in the coming weeks. US-based users will soon be able to upload their medical records directly into Fitbit's app, giving its AI health coach access to lab results, medications, and visit history alongside fitness data. On the surface, it's an appealing concept: more context means more personalised advice. But the move raises hard questions about data security, informed consent, and whether centralising medical information with a company built on advertising is actually in users' interests.
The medical data will be fed directly to the Fitbit personal health coach, a Gemini-powered AI assistant integrated into the Fitbit app. Once users link their medical data, they can ask questions like "How can I improve my cholesterol?" and the coach can provide health summaries and recommendations. The feature will arrive first in a Public Preview for US users in April.
To use this feature, users face a significant friction point. Because medical data is sensitive, integrating it into Fitbit requires users to provide an ID and a selfie. Users can either search for their healthcare provider and link their patient portal directly, or verify their identity with CLEAR, after which the app locates and syncs records across different providers using IAL2-certified identity standards. This sounds convenient, but it also means Google is collecting facial and photographic identification data alongside medical histories.
The infrastructure question
Google has partnered with identity service CLEAR and digital health platform b.well to enable this integration. Google says medical records are securely stored with Fitbit and users have control of their data; medical records, like other health data in Fitbit, are not used for ads. This commitment is important, but history suggests it's worth scrutiny.
Mozilla Foundation researchers have criticised Fitbit for collecting extensive information on users, combining it with data from third-party sources, and targeting ads based on that data. Economists from major European universities noted that "the combination of Fitbit's health data with Google's other data creates unique opportunities for discrimination and exploitation of consumers in healthcare, health insurance, and other sensitive areas." Regardless of whether Google ever uses medical data directly for ads, the company's broader data ecosystem creates risks that current policies don't address.
Accuracy and liability gaps
There's a technical dimension often glossed over in promotional messaging. Google acknowledges that the medical record navigator uses generative AI, which can produce incomplete, out-of-date, or clinically inaccurate or misleading information. The limitations are concrete: only lab documents are supported, and among these, blood labs work best. Misinterpreted abbreviations, missing context such as provider notes, and reference ranges that require professional interpretation are all potential failure points.
Google explicitly states that the feature is not intended to diagnose, treat, or prevent any medical condition, should not replace professional medical advice, and that users should consult a healthcare professional before making changes concerning their health. For a consumer relying on an AI coach to make health decisions, that disclaimer carries legal weight but offers limited practical protection.
Data use in research
Users opting into the Fitbit Labs medical record navigator should know how their data will be used. Data is de-identified, though complete anonymity is not guaranteed, and it may be shared internally, shared with research partners, and used for AI training. This is common practice in research, but it matters for informed consent. Users uploading sensitive medical records might reasonably expect them to be used only for improving the product, not for training broader AI systems.
The historical context
Fitbit's privacy track record casts a shadow over any new data-collection initiative. In 2021, health data for over 61 million fitness tracker users, including both Fitbit and Apple device owners, was exposed when a third-party company that let users sync their health data failed to secure it properly. That breach didn't involve Fitbit directly, but it illustrated how sensitive health data ripples through third-party ecosystems beyond a company's control.
European regulators have raised more recent concerns. Fitbit, acquired by Google in 2021, forces new users of its app to consent to data transfers outside the EU; users are given no way to withdraw that consent short of deleting their account entirely to stop what complainants describe as illegal processing. These constraints suggest structural issues with how Google approaches user control over health data.
What actually happens next?
The real risk isn't necessarily dramatic. It's incremental. Google isn't likely to sell medical records to insurers tomorrow. But the company is building a system that continuously expands the scope of health data it controls, integrates it with behavioural data collected across its services, and uses it for purposes that shift over time. Privacy policies can change. Business models can evolve. Once users hand over medical records to Fitbit, reversing that decision is costly: Fitbit's privacy policy states the only way to withdraw consent is to delete an account, which means losing all previously tracked workouts and health data.
The question for individual users isn't whether the feature is useful right now. For some, personalised health coaching informed by medical history clearly has value. The question is whether that value justifies giving Google a deeper window into their health status, identity, and medical vulnerability. That's a calculation each person has to make for themselves. But they should do it with eyes open to the company's data practices and the gaps in existing privacy safeguards.