From Washington: In a development that will reverberate across the Pacific, leaked internal documents from Meta reveal the company has been planning to add real-time facial recognition to its Ray-Ban smart glasses, and that it deliberately chose a launch window calculated to minimise public resistance. The feature, internally codenamed "Name Tag", could soon allow anyone wearing a pair of Meta's sleek, bestselling glasses to identify people connected to Meta's platforms simply by looking at them on the street.
The New York Times obtained internal Meta documents showing the company flagged a "dynamic political environment" as an opportunity for the rollout. One document, from May 2025, stated that Meta planned to "launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns." That is a corporate strategy document, not a privacy policy. The gap between those two things is exactly what regulators and ordinary citizens should be worried about.
The commercial context makes the stakes clearer. EssilorLuxottica, Meta's manufacturing partner, has sold over 7 million smart glasses units in the past year. The "Name Tag" feature will let Ray-Ban Meta wearers identify people connected to their Meta accounts or public Instagram profiles through the AI assistant. That means every one of those 7 million pairs of frames becomes a potential identification device pointed at anyone who crosses a wearer's path.
In January 2025, Meta restructured its internal privacy operations, reducing the privacy team's influence over product releases and imposing time limits on how long privacy risk reviews could take. Andie Millan, a director of risk review in Meta's Reality Labs division, told employees the changes would "push the bounds" of Meta's FTC agreement. That agreement is not a trivial document: Meta has operated under a privacy consent decree with the FTC since 2012, and under the 2020 accord with the agency agreed to pay $5 billion and accept stiffer privacy requirements.
The risks are not theoretical. In October 2024, two Harvard students, AnhPhu Nguyen and Caine Ardayfio, demonstrated with alarming clarity what this technology can already do when paired with off-the-shelf components. Their tool, "I-XRAY", combined the facial recognition service PimEyes with Meta's smart glasses to publicly identify individuals. By simply looking at a person, the glasses could retrieve personal details such as names, addresses, and phone numbers from online databases. The students built it using publicly available tools, without any official Meta support, and could identify a target in under two minutes.
The pair designed the project to raise awareness of how easily wearable technology can be turned into a surveillance device, and it did exactly that. They declined to release their code, but were candid about the implications. In an interview with 404 Media, Nguyen put the risk plainly: "Some dude could just find some girl's home address on the train and just follow them home."
Privacy advocates have moved quickly in the past fortnight. The Electronic Privacy Information Center (EPIC) is urging US regulators to block Meta's reported plans, warning that real-time facial recognition embedded in wearable devices would create widespread privacy harms. In letters sent to the Federal Trade Commission and state privacy enforcers, EPIC called for immediate investigation and enforcement action. The group also wrote to the nine states participating in the Consortium of Privacy Regulators (California, Colorado, Connecticut, Delaware, Indiana, Minnesota, New Hampshire, New Jersey, and Oregon) to encourage coordinated enforcement.
A Los Angeles courtroom this month provided a sharp illustration of the broader social anxiety around the technology. During a trial examining the impact of social media on children, a judge ordered Meta team members to immediately remove their Ray-Ban Meta AI glasses from the courtroom. "It is the order of this court that there must be no facial recognition of the jury," warned Judge Carolyn Kuhl.
There is a legitimate case for the technology, and intellectual honesty requires acknowledging it. Supporters of Name Tag point to accessibility advantages: recognising nearby people could help blind or low-vision users identify who is in front of them, follow conversations, or move more safely in social settings. One of the Harvard students noted potential positive use cases, including for dementia patients: "They wear glasses, it just identifies who the name is," said Nguyen. "That could really help a lot of people who struggle recognising faces." These are genuine benefits worth considering, not talking points to be dismissed.
For Australians, the question is not purely academic. Australia's Privacy Act 1988 treats biometric information as sensitive information, and the Privacy Commissioner has already ruled facial recognition technology unlawful when used without consent in retail settings. As recently as September 2025, Privacy Commissioner Carly Kind found that Kmart Australia breached the Privacy Act by using facial recognition technology in 28 stores to deter refund fraud, scanning the faces of everyone entering those stores without notice or consent. The Office of the Australian Information Commissioner has made clear that biometric collection at scale without consent will almost certainly breach Australian law, regardless of the intended purpose.
That regulatory posture puts Meta on a direct collision course with Australian law if it attempts to deploy Name Tag here. Biometric templates and biometric information collected by facial recognition technology are sensitive information under the Privacy Act, which imposes heightened requirements on the collection, use, and disclosure of such data. Individuals must consent to the collection of sensitive information unless a narrow exception applies. A stranger on a train cannot meaningfully consent to being scanned by the glasses of someone they have never met.
In the European Union, the legal position is equally restrictive: facial recognition data is classified as sensitive biometric data, so a feature like Name Tag would need a very solid legal basis, and obtaining unambiguous consent from passersby is practically impossible. The Electronic Frontier Foundation put the core problem directly: "Meta cannot possibly obtain consent from everyone, especially bystanders who are not Meta users."
The case for strong privacy regulation here is not simply a progressive concern. Individual liberty, properly understood, includes the liberty to walk through a shopping centre, a protest, or a medical clinic without having your identity extracted by a stranger's glasses and cross-referenced against a corporate database. The erosion of anonymity in public life is a property-rights problem as much as a civil liberties one: your biometric data belongs to you, and no company should be able to harvest it without your knowledge simply because you stepped outside.
At the same time, a flat prohibition on all facial recognition technology would foreclose genuine benefits in accessibility, elder care, and security. The more defensible position is the one Australia's own regulator has articulated: the Privacy Act is technology-neutral, meaning facial recognition technology is not banned, but its use must be consistent with privacy principles. Consent, proportionality, and transparency are the floor, not an aspirational ceiling.
What is most troubling about Meta's documented approach is not the technology itself, but the internal logic behind the timing. A company that calculates when critics will be too exhausted to object, and treats that window as a product launch opportunity, is not making a good-faith effort to operate within the spirit of its regulatory obligations. That calculation, more than any particular feature, is what regulators in Washington, Brussels, and Canberra should examine most closely. The Australian Parliament would do well to watch how this unfolds before wearable facial recognition becomes as common, and as unquestioned, as a smartphone camera.