Here is a question worth sitting with: what is the point of writing safety rules for the internet if the companies they target simply ignore them? Australia is about to find out whether its answer to that question has any teeth.
The country's internet regulator, the eSafety Commissioner, has signalled it is prepared to go after Apple's App Store and Google's search engine if artificial intelligence services fail to comply with new age-restriction codes by 9 March 2026. The warning is pointed: rather than chase down dozens of obscure chatbot providers individually, the regulator is threatening to cut off their distribution at the source. "eSafety will use the full range of our powers where there is non-compliance," a spokesperson for the Commissioner said, including "action in respect of gatekeeper services such as search engines and app stores that provide key points of access to particular services."
From 9 March, internet services in Australia, including AI chatbots such as OpenAI's ChatGPT and lesser-known companion chatbots, must restrict Australians under 18 from accessing pornography and content depicting extreme violence, self-harm, and eating disorders, or face fines of up to A$49.5 million. Those are not trivial sums, and the legislative backbone is real. Under the Online Safety Act 2021, online service providers are obliged to take reasonable steps to design systems that keep Australians safe, including by protecting children from exposure to age-inappropriate content.

The urgency is not abstract. A Reuters review of the 50 most popular text-based AI products, conducted in the week before the deadline, found the compliance picture was bleak. Of the companion chatbots reviewed, three-quarters had no functioning or planned filtering or age assurance, while one-sixth did not even have a published email address to report suspected breaches, which is also required under the code. Of the full cohort of 50 products, only nine had rolled out or announced age assurance systems. A further eleven had blanket content filters or planned to block all Australians from their service entirely. That left thirty with no apparent steps taken at all, as reported by iTnews.
The names making progress are mostly the largest platforms. ChatGPT, Replika, and Anthropic's Claude had started rolling out age assurance systems or blanket filters, while Character.AI cut off open-ended chat for under-18s. At the other end of the spectrum sits Grok, the chatbot operated by Elon Musk's xAI. Grok, which is under investigation globally for suspected failure to stop production of synthetic sexualised imagery of children, had no age assurance measures or text-based content filters, according to the Reuters review. xAI did not respond to a request for comment.
The regulator's concern is not merely about access to explicit content. eSafety has stated it is "concerned that AI companies are leveraging emotional manipulation, anthropomorphism and other advanced techniques to entice, entrance and entrench young people into excessive chatbot usage." Australia is yet to see reports of chatbot-linked violence or self-harm, but the regulator says it has been told of children as young as 10 talking to AI chatbots for up to six hours a day. Internationally, the stakes are already clear. OpenAI and companion chatbot startup Character.AI have faced wrongful death lawsuits over their interactions with young users, while OpenAI acknowledged this week it had deactivated the ChatGPT account of a teen mass shooting suspect in Canada months before the attack, without notifying authorities.
The Context: From Social Media to AI
This regulatory push does not emerge in a vacuum. Since 10 December 2025, age-restricted social media platforms have been required to take reasonable steps to prevent Australians under the age of 16 from creating or keeping an account. Australia became the first country to enforce a nationwide under-16 social media ban, with global regulators watching closely. The AI age-restriction codes represent the next chapter: a recognition that simply banning teenagers from Instagram does nothing if they can migrate to an unregulated AI companion chatbot designed to foster emotional dependency.
Lisa Given, director of RMIT University's Centre for Human-AI Information Environments, told Reuters the findings were unsurprising because most AI tools are being designed without a view to potential harms. "It feels as though we're beta testing all of these things for these companies," she said, "and they're trying to see how far society is willing to be pushed." It is a damning characterisation, and one that is difficult to rebut given the compliance data.
The Counter-Argument Deserves Serious Consideration
The civil liberties case against age verification is not frivolous. Age assurance technologies require platforms to collect sensitive personal data, and Australia has already seen significant data breaches at major institutions. The codes do contain privacy protections, requiring providers not to use or disclose Australians' personal information in breach of privacy law, and requiring that age assurance measures be proportionate to the safety objectives. Whether those safeguards are robust enough in practice is a question regulators will need to answer over time, not just on paper.
There is also the practical question of whether gatekeeping through app stores and search engines can work at scale. Apple said on its website it would use "reasonable methods" to stop minors downloading 18+ apps in Australia and other jurisdictions introducing age restrictions, without specifying the methods. A spokesperson for Google, Australia's dominant search engine provider and the second-largest app store operator, declined to comment. Vague commitments and silence from the two companies that control most of the world's app distribution are not a confidence-inspiring baseline for enforcement.
Jennifer Duxbury, head of policy at internet industry group DIGI, who led the drafting of the AI code before it was signed off by the regulator, pointed to a straightforward principle: ultimately, any service operating in Australia is responsible for understanding its legal obligations and ensuring it meets them. That is the correct starting point. The question is whether the enforcement architecture can give that principle some force.
Strip away the talking points and what remains is a genuinely hard regulatory problem. Australia has shown commendable willingness to act where other democracies have only debated. The legal framework is credible, the fines are substantial, and the regulator's instinct to target gatekeepers rather than chase individual bad actors is strategically sound. But compliance figures showing three in five of the most popular AI products taking no action at all, one week before a legal deadline, reveal that aspiration and implementation remain far apart. The test of this policy will not be the rules themselves; it will be what happens on 10 March.