Australia has become the first country to enforce a blanket minimum age requirement for social media. As of 10 December 2025, platforms like TikTok, Instagram, Facebook, YouTube, Snapchat, Reddit, X and Twitch are legally required to prevent under-16s from holding accounts. Now, as Phase 2 regulations roll out through March 2026, the rules are expanding to search engines, app stores, email services, and instant messaging, turning age verification into one of the tech industry's biggest compliance headaches.
The numbers already show how serious this is. The Australian government reported that over 4.7 million accounts have been deactivated, removed, or restricted as platforms scrambled to comply in the first months. Meta alone blocked around 500,000 Instagram, Facebook, and Threads accounts believed to belong to under-16s in the initial days.
But here's the tension: the law doesn't say how platforms must verify age, only that they must take "reasonable steps". This creates two competing risks. On one side, the eSafety Commissioner can fine platforms up to AUD 49.5 million for failing to block under-16s. On the other, the Office of the Australian Information Commissioner (OAIC) can penalise platforms whose checks collect more personal information than necessary.
Meta has built a three-tier system: first, behavioural analytics (watching how you use the platform); second, facial age estimation from a selfie (using Yoti's technology to estimate your age from your face); and third, government ID upload as a last resort. TikTok uses similar layering, relying on behavioural signals and suspending thousands of accounts daily when it detects suspicious activity. Both companies deliberately avoid making ID verification compulsory, because the law explicitly forbids platforms from requiring Australians to hand over government ID as the only way to prove their age.
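A rough sense of how such a layered ("waterfall") check could be wired up is sketched below. The signal names, thresholds, and escalation order are illustrative assumptions, not Meta's or TikTok's actual implementation; the point is simply that cheaper, less invasive signals are tried first, and government ID is never the mandatory path.

```python
# Hypothetical sketch of a layered ("waterfall") age-assurance check.
# All names, thresholds, and signals are illustrative assumptions,
# not any platform's real implementation.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Verdict(Enum):
    LIKELY_16_PLUS = auto()
    LIKELY_UNDER_16 = auto()
    INCONCLUSIVE = auto()


@dataclass
class Signals:
    behavioural_age_estimate: Optional[float]  # e.g. inferred from account activity
    selfie_age_estimate: Optional[float]       # e.g. from a facial age-estimation vendor
    id_verified_age: Optional[int]             # only if the user *chose* to upload ID


def assess_age(signals: Signals, threshold: int = 16) -> Verdict:
    """Escalate from cheaper, less invasive checks to more intrusive ones."""
    # Tier 1: behavioural analytics, decisive only when well clear of the cutoff.
    if signals.behavioural_age_estimate is not None:
        if signals.behavioural_age_estimate >= threshold + 3:
            return Verdict.LIKELY_16_PLUS
        if signals.behavioural_age_estimate <= threshold - 3:
            return Verdict.LIKELY_UNDER_16

    # Tier 2: facial age estimation from a user-submitted selfie.
    if signals.selfie_age_estimate is not None:
        return (Verdict.LIKELY_16_PLUS
                if signals.selfie_age_estimate >= threshold
                else Verdict.LIKELY_UNDER_16)

    # Tier 3: government ID, strictly optional; absence is not treated as failure.
    if signals.id_verified_age is not None:
        return (Verdict.LIKELY_16_PLUS
                if signals.id_verified_age >= threshold
                else Verdict.LIKELY_UNDER_16)

    # No decisive signal yet: the user is prompted to pick a verification method.
    return Verdict.INCONCLUSIVE


# Example: an activity pattern that looks like a 19-year-old never reaches the selfie step.
print(assess_age(Signals(behavioural_age_estimate=19.2,
                         selfie_age_estimate=None,
                         id_verified_age=None)))
```

The "inconclusive" outcome is where the real-world friction sits: that is the point at which a user gets nudged toward a selfie or, if they choose, an ID upload.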

The privacy stakes are real. Collecting facial biometrics from selfies, even just for age estimation, has raised alarm bells among privacy advocates. A process called "ringfencing" should theoretically keep age-verification data separate from advertising algorithms and user profiling, but tech breaches are common enough that this feels optimistic. The eSafety Commissioner has confirmed that platforms must use a "successive validation" approach, layering multiple checks rather than relying on any single signal, and must avoid relying solely on self-declaration, but she has also stressed that checks should be as "minimally invasive as possible".
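For what ringfencing means in practice, here is a minimal sketch, assuming a hypothetical split between an isolated age-check store and the general profile store. The class and method names are invented for illustration; the idea is that only a boolean outcome crosses the boundary, and the raw inputs are purged once a verdict is recorded.

```python
# Minimal sketch of "ringfencing": raw age-check inputs live in an isolated store
# that the profiling/advertising side cannot read, and are deleted after the check.
# All names here are hypothetical, for illustration only.
class RingfencedAgeStore:
    """Temporarily holds raw age-check inputs (e.g. selfie-derived estimates)."""

    def __init__(self):
        self._pending: dict[str, dict] = {}  # keyed by user id, never exported

    def record_check(self, user_id: str, raw_inputs: dict) -> None:
        self._pending[user_id] = raw_inputs

    def finalise(self, user_id: str, over_16: bool) -> bool:
        # Only the boolean outcome leaves the ringfence; raw inputs are purged.
        self._pending.pop(user_id, None)
        return over_16


class ProfileStore:
    """The general profile / ads side only ever sees the boolean flag."""

    def __init__(self):
        self._flags: dict[str, bool] = {}

    def set_age_flag(self, user_id: str, over_16: bool) -> None:
        self._flags[user_id] = over_16


def complete_age_check(user_id: str, over_16: bool,
                       ringfence: RingfencedAgeStore, profiles: ProfileStore) -> None:
    # The only thing that crosses the boundary is the verdict itself.
    profiles.set_age_flag(user_id, ringfence.finalise(user_id, over_16))
```

The design choice that matters is that the profiling side never receives the raw estimates or images, only the flag; whether production systems actually hold that line is precisely what privacy advocates are questioning.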
The global tech industry is watching closely. Australia's model is setting the template for how other democracies might enforce age restrictions online. But the early evidence suggests that genuine age verification at scale, without either letting kids slip through or harvesting their biometric data, is a problem nobody has fully solved yet.
For the next three months, platforms will be ramping up their compliance as Phase 2 regulations kick in. Search engines like Google will need to implement age assurance by 27 June 2026. App stores must comply by 9 September. The theory is sound: keep kids off age-inappropriate platforms. The practice is messier, and the privacy costs are still being tallied.