On March 9, Australia's world-first social media age ban expanded dramatically. Phase 2 of the Online Safety Amendment (Social Media Minimum Age) Act went live this week, pushing age assurance requirements beyond Instagram and TikTok to email services, instant messaging, online gaming, search engines, and app stores. It is regulatory overreach on a scale only Australia could attempt, affecting hundreds of platforms and services.
The first phase was already ambitious. Since December 10, 2025, social media platforms have been legally required to prevent users under 16 from creating or maintaining accounts. The enforcement has teeth: platforms face fines of up to AUD 49.5 million for systemic failures. In response, Meta, TikTok, YouTube, and others moved quickly. Meta removed 550,000 accounts in December alone. By mid-January 2026, platforms had collectively deactivated 4.7 million accounts judged to belong to under-16s.
But here's where the story gets uncomfortable. The age verification technology that is supposed to prevent those under-16 accounts from being created in the first place is failing in ways that should alarm policymakers.
Australia's approach uses three layers of age checking: behavioural analysis (watching how you use the app), biometric estimation (inferring age from facial analysis), and formal verification using official documents or bank data. The idea is elegant: use whichever method works, without making government ID collection mandatory, which regulators explicitly prohibited.
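As a rough illustration of how such a layered fallback might be orchestrated, consider the sketch below. The function names, confidence threshold, and escalation order are illustrative assumptions, not any platform's actual implementation or anything specified by the legislation.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class AgeSignal:
    estimated_age: Optional[float]  # None when the method cannot produce an estimate
    confidence: float               # 0.0-1.0: how much the method trusts its own estimate

# Hypothetical placeholders for the three layers described above;
# a real platform would call its own models and verification vendors here.
def behavioural_analysis(user_id: str) -> AgeSignal:
    return AgeSignal(estimated_age=None, confidence=0.0)   # passive usage signals

def biometric_estimation(user_id: str) -> AgeSignal:
    return AgeSignal(estimated_age=None, confidence=0.0)   # selfie / video age estimate

def formal_verification(user_id: str) -> AgeSignal:
    return AgeSignal(estimated_age=None, confidence=0.0)   # documents or bank data

def assure_minimum_age(user_id: str, minimum_age: int = 16,
                       confidence_floor: float = 0.9) -> bool:
    """Try the least intrusive layer first; escalate only when the
    current layer cannot give a confident answer."""
    layers: list[Callable[[str], AgeSignal]] = [
        behavioural_analysis,   # least intrusive
        biometric_estimation,
        formal_verification,    # most intrusive
    ]
    for check in layers:
        signal = check(user_id)
        if signal.estimated_age is None or signal.confidence < confidence_floor:
            continue  # inconclusive: fall through to the next, more intrusive layer
        return signal.estimated_age >= minimum_age
    # No layer produced a confident estimate; blocking is the conservative default.
    return False
```

The intent of a design like this is that most users never reach the most intrusive layer: escalation happens only when a cheaper, less invasive signal is inconclusive.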
The execution has not been elegant. Facial age estimation systems, meant to be privacy-friendly, have proven spectacularly unreliable. Eleven-year-olds have been judged to be 30 years old, while 16-year-olds, legitimately old enough to use the platforms, have been locked out. Tech-savvy teenagers have discovered that drawing fake facial hair with makeup is enough to fool the algorithms. Meta's own selfie video verification system failed to identify a 13-year-old as under 16 when tested.
This creates a paradox. To enforce a law intended to protect children's privacy and safety online, platforms are collecting sensitive biometric data from millions of people. That data is supposedly segregated from advertising and recommendation systems under what regulators call the "Ringfence and Destroy" protocol. But the policy now requires this data collection to expand to email providers, game publishers, and search engines. If facial age estimation cannot distinguish a teenager from an adult, what's the point of collecting that data?
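In data-handling terms, that obligation amounts to something like the pattern below. This is a simplified sketch under loose assumptions: the function names, the constant returned by the placeholder, and the clean-up step stand in for whatever a real platform and its vendors actually do, and none of it is drawn from the regulator's own specification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgeCheckOutcome:
    over_16: bool    # the only fact allowed to leave the ringfence
    method: str      # which layer produced the answer

def estimate_age_from_selfie(selfie: bytes) -> float:
    # Placeholder: a real platform would call its age-estimation model or
    # vendor here. Returning a constant keeps the sketch executable.
    return 21.0

def ringfenced_age_check(selfie: bytes) -> AgeCheckOutcome:
    """Run the biometric check inside an isolated boundary: derive a single
    over/under outcome, then discard the raw input. Nothing in this function
    writes to advertising, recommendation, or analytics stores."""
    try:
        estimated_age = estimate_age_from_selfie(selfie)
        return AgeCheckOutcome(over_16=(estimated_age >= 16.0),
                               method="facial_age_estimation")
    finally:
        # "Destroy": release the only reference to the biometric input held here;
        # a real system would also purge any uploaded copy and avoid logging it.
        del selfie

outcome = ringfenced_age_check(b"...selfie image bytes...")
print(outcome)   # AgeCheckOutcome(over_16=True, method='facial_age_estimation')
```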
Early effectiveness data is mixed. The eSafety Commissioner is tracking 4,000 families over two years to understand real-world impacts. Initial reports show a 25% reduction in cyberbullying complaints and increased offline activity among younger users. But teenagers have simply migrated to alternative platforms: TikTok-owned Lemon8, Coverstar, and RedShort have seen massive download spikes. And some teens report feeling cut off from their peers rather than safer.
The fundamental issue is that Australia has mandated a technical solution to what is really a policy and culture problem. Protecting children online matters. But requiring age verification across hundreds of services, when the core technology cannot reliably distinguish an 11-year-old from a 30-year-old, sets the government on a path where regulatory failure is virtually guaranteed.
Legal challenges to the law are scheduled for 2026, with hearings expected to begin in February at the earliest. By the time courts weigh in on whether the government overreached, Phase 2 will be fully embedded across Australia's digital services. The question isn't really whether this policy protects children. It's whether regulators have created something worse: a regulatory apparatus collecting sensitive biometric data from millions of people to enforce rules that the underlying technology cannot actually support.