
Archived Article — The Daily Perspective is no longer active. This article was published on 15 March 2026 and is preserved as part of the archive.

Politics

Australia's Age Barrier: How Social Media Platforms Are Trying to Block Under-16s

Phase 2 regulations roll out this month, but the challenge of verifying age without invading privacy is proving harder than the law suggests.

Key Points
  • Australia's social media minimum age law (16+) took effect on 10 December 2025; Phase 2 regulations expand coverage to search engines, app stores, and email in March 2026.
  • Over 4.7 million accounts have been deactivated or restricted as platforms comply with the law, which carries fines of up to AUD 49.5 million for non-compliance.
  • Meta, TikTok, and other platforms use 'waterfall' age verification combining behavioural analytics, facial estimation, and ID checks, each with privacy trade-offs.
  • The law prohibits platforms from compelling users to provide government ID, creating tension between protecting children and protecting privacy, with regulators on both sides ready to issue fines.
  • Tech companies argue the rules are technically feasible but warn that overly intrusive age checks could expose users to data breaches or feed biometric data to advertisers.

Australia has become the first country to enforce a blanket minimum age requirement on social media. As of 10 December 2025, platforms like TikTok, Instagram, Facebook, YouTube, Snapchat, Reddit, X and Twitch are legally required to prevent under-16s from having accounts. Now, as Phase 2 regulations roll out through March 2026, the rules are expanding to search engines, app stores, email services, and instant messaging—turning age verification into the tech industry's biggest compliance headache.

The numbers already show just how serious this is. The Australian government reported that over 4.7 million accounts have been deactivated, removed, or restricted as platforms scrambled to comply in the first months. Meta alone blocked around 500,000 accounts believed to belong to under-16s from Instagram, Facebook, and Threads in the initial days.

But here's the tension: the law doesn't say how platforms must verify age. It only says they must take "reasonable steps". This creates two competing risks. On one side, the eSafety Commissioner can fine platforms up to AUD 49.5 million for failing to block under-16s. On the other side, the Office of the Australian Information Commissioner (OAIC) can fine platforms for being too intrusive with their checks.

Meta has built a three-tier system: first, behavioural analytics (watching how you use the platform); second, facial age estimation from a selfie (using Yoti's technology to guess your age from your face); and third, government ID upload as a last resort. TikTok uses similar layering, relying on behavioural signals and suspending thousands of accounts daily when it detects suspicious activity. Both companies deliberately avoid making ID verification compulsory, because the law explicitly forbids platforms from forcing Australians to hand over government ID.
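The layered approach described above can be pictured as a cascade that escalates only when a cheaper, less invasive check isn't confident enough. The sketch below is purely illustrative: the function names, confidence thresholds, and stub return values are assumptions for the sake of the example, not Meta's or TikTok's actual systems.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgeEstimate:
    age: int           # estimated age in years
    confidence: float  # 0.0-1.0, how sure this check is

def behavioural_check(user_id: str) -> AgeEstimate:
    # Stand-in for signals like account history and usage patterns.
    return AgeEstimate(age=17, confidence=0.55)

def facial_estimation(user_id: str) -> AgeEstimate:
    # Stand-in for selfie-based age estimation (a Yoti-style service).
    return AgeEstimate(age=17, confidence=0.90)

def id_check(user_id: str) -> AgeEstimate:
    # Last resort; the law forbids making this step compulsory.
    return AgeEstimate(age=17, confidence=1.0)

def verify_age(user_id: str, min_age: int = 16,
               min_confidence: float = 0.85) -> bool:
    """Run checks least-invasive first; stop at the first confident result."""
    waterfall: list[Callable[[str], AgeEstimate]] = [
        behavioural_check, facial_estimation, id_check,
    ]
    for check in waterfall:
        estimate = check(user_id)
        if estimate.confidence >= min_confidence:
            return estimate.age >= min_age
    # No check was confident enough: fail closed under the minimum-age rule.
    return False
```

The key design property is that the most privacy-invasive step (ID upload) is only ever reached when every earlier check fails to produce a confident answer, and even then it can be declined.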

[Image: teenager looking at a smartphone showing a social media app] Young Australians are being locked out of social platforms as age verification systems tighten. The challenge now is whether these systems protect privacy or compromise it.

The privacy stakes are real. Collecting facial biometrics from selfies, even for age estimation, has raised alarm bells among privacy advocates. A process called "ringfencing" should theoretically keep age-verification data separate from advertising algorithms and user profiling, but tech breaches are common enough that this feels optimistic. The eSafety Commissioner has confirmed that platforms must use a "successive validation" approach and avoid relying solely on self-declaration, but she's also stressed that checks should be as "minimally invasive as possible".

The global tech industry is watching closely. Australia's model is setting the template for how other democracies might enforce age restrictions online. But the early evidence suggests that genuine age verification at scale, without either letting kids slip through or harvesting their biometric data, is a problem nobody has fully solved yet.

For the next three months, platforms will be ramping up their compliance as Phase 2 regulations kick in. Search engines like Google will need to implement age assurance by 27 June 2026. App stores must comply by 9 September. The theory is sound: keep kids off age-inappropriate platforms. The practice is messier, and the privacy costs are still being tallied.

Jake Nguyen

Jake Nguyen is an AI editorial persona created by The Daily Perspective. Covering gaming, esports, digital culture, and the apps and platforms shaping how Australians live with a modern, culturally literate voice. As an AI persona, articles are generated using artificial intelligence with editorial quality controls.