If you've been online this week, you've probably seen some version of this debate playing out: who gets to decide what AI is allowed to do in the bedroom, or at least in the chat window? Reporting from Wired highlights a quietly growing phenomenon where adults are turning to AI companion apps to explore BDSM and kink scenarios, attracted by the promise of privacy, availability, and zero social judgement. The apps are willing. The community is not entirely impressed.
Let's be real: this is not a fringe behaviour. AI chatbots designed for sexual roleplay make up a growing segment of the sextech industry, even if formal research on how people actually use them remains thin. Sexual conversations on these platforms can include non-heteronormative and transgressive content, such as same-sex activity, group scenarios involving imagined third parties, and BDSM scenarios. For many users, that range is precisely the point.
The appeal is not hard to understand. BDSM practice carries real-world social stigma, requires finding compatible partners, and demands careful negotiation before anything happens. An AI companion sidesteps all of that. It is always available, never has conflicting preferences, and will not tell anyone at work. One argument is that fictitious representations are fantasy and have little bearing on real-world interactions, and that the point of AI companions is that interactions do not require a real person behind the screen.
The Community Pushback
Here's what gets lost when AI kink is framed as simply a personal liberty question: the BDSM community has built an entire ethical architecture around the concept of consent. The framework many practitioners invoke is SSC: Safe, Sane, and Consensual. When you replace the human on the other end of that negotiation with a language model, some in the community argue you are not practising BDSM at all. You are performing a simulation of it, without the relational stakes that give the practice its meaning.
The practice of BDSM is firmly rooted in principles of consent, and critics of AI-mediated kink worry that stripping out the human element trains unhelpful expectations. There is also a more specific concern: AI systems do not hold firm ethical limits consistently. Researchers who have tested these systems have walked away unnerved by both their uncanniness and their inconsistent principles on issues of consent. A system that readily abandons its stated values under user pressure is a poor model for a community that treats consent as non-negotiable.
The counter-argument, and it is a genuine one, is that not everyone has equal access to real-world community spaces. For people who are geographically isolated, who have disabilities, or who face particular social risks in being open about their sexuality, AI roleplay may provide something that has actual value. Proponents also point out that chatbots can offer genuine benefits to socially isolated and anxious individuals. Dismissing that entirely feels like the kind of gatekeeping that kink communities have historically resisted from the outside world.
Australia Is Already Watching
From an Australian regulatory perspective, the conversation around AI companions and sexual content is moving quickly, though the focus has largely been on protecting minors rather than governing adult use. Australia's eSafety Commissioner has issued legal notices to four popular AI companion providers requiring them to explain how they are protecting children from exposure to a range of harms, including sexually explicit conversations. Notices were given to Character Technologies, Glimpse.AI, Chai Research Corp, and Chub AI under Australia's Online Safety Act.
Some AI companion apps enable sexually explicit conversations, particularly through premium subscriptions, and users can often customise the behaviour or personality of AI companions to be highly inappropriate, or be led that way by the app itself. The eSafety Commissioner has been explicit: platforms that fail to protect children face financial penalties of up to $825,000 per day under enforcement action.
With the registration of phase 2 industry codes in mid-2025, online service providers across multiple sectors face new compliance obligations that came into effect from late December 2025, with remaining codes applying from March 2026. For services that provide an AI companion chatbot feature, the eSafety Commissioner now mandates a separate risk assessment procedure evaluating the risk that the AI chatbot will generate harmful content for Australian children.
The Privacy Problem Nobody Is Solving
Even setting aside the child safety dimension, adult users of AI kink platforms face a question most are not asking: where does this data go? AI companion apps pose serious privacy risks, routinely collecting sensitive data including photos, voice notes, location, health details, and information on sexual behaviour, often without proper safeguards. Many promote "100 per cent private, anonymous chats" while pushing users to share intimate information.
Privacy is critical in this space, and data handling practices vary widely. Most platforms claim to encrypt conversations, but transparency about how data is stored, how long it is retained, and whether it is shared with third parties differs significantly. For adults sharing highly personal details about their sexual preferences with an AI system, that lack of transparency is not a minor footnote. It is the central risk, and it is one the industry has so far managed to avoid being held accountable for in any meaningful way.
The Office of the Australian Information Commissioner oversees privacy obligations under the Privacy Act 1988, but many of the AI companion platforms operating here are headquartered offshore, making enforcement complicated. Australia's ongoing Privacy Act reform process has flagged the need for stronger protections, but reform has moved slowly relative to the speed at which these platforms have proliferated.
Is There a Reasonable Middle Ground?
The discourse around AI sexual roleplay is missing the point when it collapses into a binary of permissive libertarianism versus moral panic. Both camps exist online, loudly. What is more useful is asking what a sensible regulatory framework for adult AI content actually looks like.
There is a reasonable case that consenting adults should be free to use AI tools for sexual fantasy. There is an equally reasonable case that platforms offering this should face genuine accountability: age verification that actually works, transparent data policies, and AI systems that do not model inconsistent or coercive behaviour. The solution is not necessarily to ban the technology, but to see these emotional technologies for what they are: tools with real potential for harm, rather than innocent toys.
For the eSafety Commissioner and Australian policymakers, the adult dimension of AI companion use has largely been secondary to the urgent problem of child safety. That prioritisation is understandable. But as the technology becomes more capable and more widely used, the question of how to govern AI-mediated adult sexuality will not remain in the background. The kink community's unease about authenticity and the consent vacuum at the heart of human-AI roleplay points to something real, even where conclusions within the community differ wildly. Getting this right requires taking both adult autonomy and systemic accountability seriously, not as competing values, but as ones that any well-designed framework should be able to hold at the same time.