Meta is facing a class action lawsuit for false advertising related to its AI glasses following reports about the company's use of human contractors to review footage captured from users' glasses. The lawsuit, filed Wednesday in federal court in San Francisco, alleges that Meta's claims about the devices' privacy features have misled users.
The lawsuit comes after a Swedish newspaper reported that subcontractors in Kenya have raised concerns about viewing footage recorded via Ray-Ban Meta glasses. According to Svenska Dagbladet, workers have reported witnessing intimate material, including bathroom visits, sexual encounters and other private details as part of their job labelling objects in videos captured on users' smart glasses.
In the newly filed complaint, plaintiffs Gina Bartone of New Jersey and Mateo Canu of California, represented by the public interest-focused Clarkson Law Firm, allege that Meta violated privacy laws and engaged in false advertising. According to the complaint, the marketing of Ray-Ban Meta Smart Glasses included phrases such as "designed for privacy" and "controlled by you," whilst failing to clearly disclose that footage could be reviewed by overseas workers.
"Meta made a promise to millions of consumers while knowing full well it could not keep it," said Clarkson Law Firm managing partner Ryan Clarkson. The lawsuit points to an apparent disconnect between Meta's marketing and the actual operation of its AI features, particularly the company's Live AI function. There is no way to use the smart glasses' multimodal features without sharing captures of your surroundings with the company. Footage that is captured but not stored locally for users, such as video recorded while Live AI is in use, can be sent to contractors who help train the company's AI models.
In 2025, over seven million people bought Meta's smart glasses. The footage from those glasses is fed into a data pipeline for review, and users can't opt out. This scale gives the lawsuit potential to affect a substantial portion of the consumer market for smart glasses technology.
Meta's Response and the Privacy Policy Question
A spokesperson for Meta confirmed to Engadget that data from its smart glasses can be shared with human contractors in some cases. The company declined to comment on the claims in the lawsuit. "Ray-Ban Meta glasses help you use AI, hands free, to answer questions about the world around you," the spokesperson said. Unless users choose to share media they've captured with Meta or others, that media stays on the user's device, according to the company. "When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people's experience, as many other companies do. We take steps to filter this data to protect people's privacy and to help prevent identifying information from being reviewed."
The company's statement hinges on a critical distinction: that data remains local unless users actively share content. Yet the lawsuit contends this distinction is meaningless in practice. Meta's privacy policy doesn't specifically mention the use of human contractors, though it states that such data can be used for training purposes. The case will likely turn on whether buried references in terms of service constitute adequate disclosure of a practice affecting millions of users.
Broader Regulatory Scrutiny
The news prompted the UK's Information Commissioner's Office to open a formal inquiry into Meta's data handling practices for the smart glasses. This regulatory attention suggests the issue extends beyond individual litigation into questions of whether Meta has met legal obligations around transparency and data protection in multiple jurisdictions.
The lawsuit raises legitimate concerns about disclosure. Consumer technology marketing increasingly emphasises privacy features without clarifying what data flows occur automatically behind the scenes. A balanced assessment, however, requires acknowledging Meta's point that many AI companies rely on human review of data. The question is not whether this practice exists, but whether users understood it was occurring when they purchased the device. The court will need to weigh what constitutes adequate notice against what marketing claims consumers would reasonably expect to inform their purchasing decisions. This tension between accessible consumer marketing and complete technical accuracy remains unresolved across the technology industry.