In a move that echoes one of the internet's most successful privacy victories, the creator of Signal is now deploying similar encryption technology across Meta's AI systems. Moxie Marlinspike, the cryptographer who built Signal's protocol and later saw it adopted by WhatsApp, announced that his new encrypted AI chatbot called Confer will power conversations on Meta AI, potentially protecting the chat histories of millions of users from corporate surveillance.
The technical achievement is genuine. Confer encrypts both prompts and responses so that companies and advertisers cannot access user data. Its cryptographic design ensures that even though the compute-intensive work of running the AI still happens on a server in the cloud, the only person who can access the unscrambled details of that computation is the user. The system relies on two core technologies: passkeys that generate encryption keypairs stored only on user devices, and trusted execution environments on servers that prevent even administrators from accessing data.
This matters because the current state of AI privacy is bleak. When you interact with existing chatbots, your data is not held privately, especially for the most useful models, which are closely guarded by AI companies and far too big to run on a local machine anyway. When using ChatGPT, Gemini, or Claude, you hand over your thoughts in plaintext to companies that store every word you type, and they can be forced by courts to preserve and hand over your logs. A court order in May 2025 required OpenAI to preserve all ChatGPT user logs, including deleted chats, and OpenAI CEO Sam Altman has admitted that even therapy sessions on the platform may not stay private.
Marlinspike's concern extends beyond legal discovery. He argues that AI chat logs reveal how you think, and could become the key to a profoundly more powerful and manipulative form of advertising that he sees as inevitable, "as if a third party pays your therapist to convince you of something".
Yet the announcement comes at a peculiar moment. Even as Meta moves to encrypt AI conversations, it is simultaneously dismantling encryption elsewhere. Meta has announced plans to discontinue support for end-to-end encryption for chats on Instagram after May 8, 2026. When Instagram's encryption sunsets, Meta will regain the technical ability to scan and act on the content of users' DMs, reopening the door to automated content moderation, AI-powered scam detection, and easier compliance with law enforcement requests.
This contradiction matters. The New York Times has noted Meta's repeated privacy problems over the years, including large settlements tied to facial recognition, the loosening of some internal privacy checks earlier this year, and reports that Meta's AI smart glasses have sent intimate videos to human moderators. Messages sent to Meta AI are processed on the company's servers to generate responses and maintain context, placing them outside WhatsApp's standard end-to-end encrypted user-to-user model; users may disclose sensitive details without knowing how long the data is stored or how it may be used.
There is a legitimate counterargument. Brian Long, CEO of Adaptive Security, argues that the calculus Meta is making reflects a necessary course correction: as companies have leaned into privacy, that shift has also made it easier for bad actors to run scams and attack consumers. Law enforcement and child safety advocates have long argued that encryption creates obstacles to detecting illegal activity.
Whether Marlinspike's technology can overcome that trade-off remains uncertain. It is still early days, and much remains unknown: where the system will appear first, how much of the pipeline actually stays protected, whether Meta retains any visibility into prompts, and whether any part of it will receive outside audits. Those details will largely determine whether the privacy guarantees hold in practice.
The deeper issue is credibility. Marlinspike's key victory with the Signal protocol was arguably not the creation of Signal itself, but the fact that WhatsApp later used Signal's code to encrypt the chats of billions of users, magnifying its effects far more widely. If history repeats itself and Confer's technology becomes the standard for AI conversations across Meta's platforms, it could be one of the most significant privacy gains of the decade. Yet Meta's simultaneous retreat from encryption on Instagram suggests the company's privacy commitments remain conditional, shifting with safety concerns and regulatory pressure. That inconsistency leaves users and privacy advocates with a legitimate reason to remain cautious, even as the underlying technology offers genuine promise.