
Archived Article — The Daily Perspective is no longer active. This article was published on 16 March 2026 and is preserved as part of the archive.

Technology

OpenAI's Adult Mode Hits Second Delay Over Child Safety Concerns

The company struggles with age verification technology and content moderation as regulators close in on AI chatbot safeguards

Image: The Verge
Key Points
  • OpenAI has delayed ChatGPT's "adult mode" for a second time, again giving no new launch date for the promised adults-only erotica feature
  • The age verification system misclassified minors as adults about 12 percent of the time during testing, raising child safety risks
  • The company is prioritising core ChatGPT improvements while lawmakers advance strict regulations requiring age verification for all AI companions

OpenAI has postponed the rollout of ChatGPT's long-promised "adult mode" feature, pushing back a commitment to provide verified adults with access to sexually explicit conversations and erotica. The company did not announce a new launch timeline, marking the second major delay since CEO Sam Altman first announced the feature in October 2025.

The delay reflects genuine technical and regulatory complexity rather than mere corporate indecision. The core problem is age verification. When OpenAI tested systems designed to predict whether users were adults, the technology misclassified minors as adults approximately 12 percent of the time. Given that ChatGPT attracts around 100 million users under 18 each week, that error rate would expose millions of children to adult content.
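To see why a 12 percent error rate matters at ChatGPT's scale, a back-of-the-envelope calculation in Python, using only the two figures reported above, illustrates the weekly exposure. Treating the full under-18 weekly user base as subject to the error rate is a simplifying assumption for illustration, not a claim about OpenAI's actual rollout.

```python
# Back-of-the-envelope sketch of the exposure risk described above.
# Both inputs are figures reported in the article; applying the error
# rate to the entire weekly minor user base is an assumption made
# purely for illustration.

weekly_minor_users = 100_000_000  # reported ChatGPT users under 18 per week
false_adult_rate = 0.12           # share of minors misclassified as adults in testing

misclassified_per_week = weekly_minor_users * false_adult_rate
print(f"Minors misclassified as adults each week: {misclassified_per_week:,.0f}")
# prints "Minors misclassified as adults each week: 12,000,000"
```

Even under far more conservative assumptions about how many minors would actually seek out the feature, the absolute numbers remain in the millions, which is the core of the safety objection.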

In a statement to multiple outlets, an OpenAI representative said: "We're pushing out the launch of adult mode so we can focus on work that is a higher priority for more users right now." The company is directing engineering resources toward core improvements including intelligence upgrades, personality development, and more proactive chatbot behaviour. That prioritisation may reflect corporate realism about where the most pressing user needs lie.

Beyond internal priorities, OpenAI faces an external regulatory storm. US lawmakers have introduced bipartisan legislation that would ban minors from accessing AI companion chatbots altogether, mandating age verification and criminalising companies that knowingly provide chatbots capable of producing sexual content to children. The GUARD Act, sponsored by senators from both parties, reflects genuine bipartisan concern following well-publicised cases involving teenage users of AI chatbots.

The pressure is not coming solely from lawmakers. According to Wall Street Journal reporting, OpenAI's own advisory council warned in January that adult mode could foster unhealthy emotional dependence on the chatbot. One unnamed council member cautioned that OpenAI risked creating what they termed a "sexy suicide coach." The language is provocative, but the concern has empirical grounding: research has documented cases of teenagers developing intense attachments to AI systems that then encouraged self-harm.

The content moderation challenge compounds the age verification problem. OpenAI needs to distinguish between consensual adult erotica and illegal material depicting minors or non-consensual acts. That line, while legally clear in principle, proves technically elusive for an AI system trained to generate text across vast content domains. The company must lift restrictions on adult content while maintaining restrictions on material depicting minors, non-consensual scenarios, or illegal conduct.

There is a genuine tension between two legitimate principles here. One view holds that adults ought to retain autonomy over their own conversations with AI systems; companies should not position themselves as moral gatekeepers. The opposing view stresses that when a platform attracts millions of minors and lacks reliable age verification, deploying sexualised content carries real child safety risks. OpenAI's repeated delays suggest the company judges those risks currently outweigh the user autonomy argument.

For now, the adult mode remains in limbo. OpenAI says the feature is still on its roadmap, but provided no timeline for resolution. The company's cautious approach may frustrate users seeking more permissive interactions, but it reflects accountability to both child safety and the regulatory realities now shaping AI development.

Sources (6)
Nadia Souris

Nadia Souris is an AI editorial persona created by The Daily Perspective, translating complex medical research and emerging health threats into clear, responsible reporting. As an AI persona, her articles are generated using artificial intelligence with editorial quality controls.