Here is the fundamental question: when OpenAI allows verified adults to share their sexual fantasies and intimate desires with ChatGPT, who really owns that data? The company that listens, or the person speaking?
OpenAI's CEO of Applications, Fidji Simo, confirmed plans for an "adult mode" to debut in the first quarter of 2026. On the surface, this sounds like what the company says it is: treating adults like adults, respecting freedom, relaxing unnecessary paternalism. Strip away the marketing language, and what remains is far more troubling.
With Adult Mode, the most intimate details of users' sexual preferences, fantasies, and desires could be logged, analysed, and incorporated into the company's training data pipelines, putting OpenAI on a collision course with privacy regulations worldwide. Consider what that means in practice. Every confession, every desire expressed in supposed privacy becomes fuel for a corporate machine designed to extract profit.
OpenAI has made vague promises about security. The company's long-term roadmap includes advanced security features designed to keep data private, including client-side encryption for messages, which they believe will help keep private conversations private and inaccessible to anyone else, even OpenAI. Note the word: "long-term". Not immediately. Not before intimate conversations begin. Later.
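It is worth spelling out what genuine client-side encryption would guarantee: the key never leaves the user's device, so the server stores only ciphertext it cannot read. A toy sketch of the idea (illustrative only; this reflects nothing about OpenAI's actual design, and a real system would use an authenticated cipher such as AES-GCM rather than a XOR pad):

```python
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Toy XOR one-time pad, purely to show the principle; a real
    # implementation would use an authenticated cipher (e.g. AES-GCM).
    return bytes(p ^ k for p, k in zip(plaintext, key))

# The key is generated and kept on the user's device, never uploaded.
key = secrets.token_bytes(64)
message = b"a private conversation"

# Only ciphertext ever leaves the device; the server sees no plaintext.
ciphertext = encrypt(message, key)
assert ciphertext != message

# Only the key holder can reverse the operation.
assert encrypt(ciphertext, key) == message
```

The point of the sketch is the asymmetry: without this property in place from day one, "inaccessible even to OpenAI" is a promise about the future, not a fact about the product.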
The company has also acknowledged existing surveillance infrastructure. OpenAI routes conversations of users planning to harm others to specialised pipelines reviewed by a small team trained on usage policies, and if human reviewers determine a case involves imminent threat of serious physical harm, the company may refer it to law enforcement. This creates a precedent. Once the infrastructure exists to examine intimate conversations for one stated purpose, the scope of that examination inevitably expands.
Now consider the age verification system meant to protect minors. Internal sources told the Wall Street Journal that OpenAI's age-prediction system has been misclassifying minors as adults 12 percent of the time. Multiplied across ChatGPT's enormous user base, that error rate means millions of underage children could be accessing inappropriate chats. This is not a minor implementation flaw. At scale, it is a systematic failure to protect children from exactly the harm the company claims to prevent.
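The scale of that failure is easy to underestimate. A back-of-the-envelope calculation makes it concrete (the user-base and minor-share figures below are hypothetical assumptions for illustration, not OpenAI disclosures; only the 12 percent error rate comes from the reporting):

```python
# Illustrative arithmetic only. The first two figures are assumptions.
weekly_users = 800_000_000        # assumed total user base
minor_share = 0.05                # assumed fraction who are minors
misclassification_rate = 0.12     # error rate reported by the WSJ

minors = weekly_users * minor_share
misclassified = minors * misclassification_rate
print(f"{misclassified:,.0f} minors misclassified as adults")
# Under these assumptions: roughly 4.8 million
```

Even with a conservative guess at how many users are minors, a 12 percent error rate leaves millions of children on the wrong side of the gate.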
The counterargument deserves serious consideration: adults should have autonomy over their own intimate choices. No one is forced to use ChatGPT. If people choose to engage with the service, that is their decision. That argument contains genuine truth, but it ignores the power imbalance. Erotic chatbots can simulate care, warmth, and attention, and that emotional pull is powerful, especially for young people. Research has found that 67 percent of children aged nine to 17 already use AI chatbots, and 35 percent say it feels like "talking to a friend". For those children, the choice is not entirely voluntary when the system is designed to feel like a trusted confidant.
Throughout the history of espionage, one of the most effective means of intelligence gathering has been the intimate partner; from an expert perspective, an erotic chatbot becomes a new, incredibly easy tool for mass surveillance. That is not hyperbole. It is a straightforward assessment of what happens when you collect detailed data about people's sexual desires and vulnerabilities.
The EU's GDPR classifies sexual orientation and intimate life details as 'special category' data requiring the highest level of protection, and California's privacy laws impose strict requirements on how companies handle sensitive personal information. Yet how OpenAI plans to navigate this regulatory minefield remains unclear, as the company hasn't publicly detailed what safeguards will protect Adult Mode conversations.
This reveals the true nature of the decision. It is not ultimately about freedom or user autonomy. OpenAI burned through more than 2.5 billion dollars in cash in the first half of 2024, and erotic chat promises what investors crave most: engagement. The company is not becoming more libertarian; it is becoming more desperate for revenue. Adult mode is a commercial calculation dressed up as enlightened policy.
The principle at stake is straightforward: you cannot simultaneously promise privacy while designing systems to maximise engagement and data extraction. Institutional accountability requires OpenAI to make that choice explicit before adult mode launches. Users deserve to know exactly what conversations will be stored, how long they will be kept, who can access them under what circumstances, and whether they will be used to train AI models. Until the company provides that transparency, the feature should not go live.