
Archived Article — The Daily Perspective is no longer active. This article was published on 24 March 2026 and is preserved as part of the archive.

Opinion | Politics

Let's Be Real: Australia's Age Gating Is Training Kids to Lie Better

Three months into the social media ban, and with gaming regulations already on the books, both policies are failing where it matters most.

Key Points
  • Australia's social media ban for under-16s (in effect since 10 December 2025, with final regulatory codes from 9 March 2026) and its loot box regulations are well-intentioned but already failing
  • Kids are bypassing age verification through facial recognition exploits and by fooling age-estimation AI, while compliance rates remain low
  • 48% of Google Play Store games containing loot boxes remain non-compliant with the new M-rating requirement
  • Age-based restrictions alone don't address predatory design mechanics that make games and platforms engaging to young users
  • The real solution requires regulating dangerous features, not just restricting access by age

If you've been online in Australia over the past three months, you've probably heard about the social media ban for under-16s. It came into effect on 10 December 2025, with the last regulatory codes kicking in on 9 March 2026. The government promised it would protect young Australians. Meta blocked over 500,000 accounts. Everything looked strict, controlled, locked down. But a 15-year-old in Melbourne figured out how to recover her suspended Instagram account using the app's own facial recognition tool. A 13-year-old fooled Snapchat's age-estimation AI by drawing wrinkles on his face with makeup. Meanwhile, nearly half of the games on the Google Play Store are still flouting Australia's regulations on gambling-like content, rules that were supposed to take effect months ago.

Here's what nobody's talking about: Australia's age-based approach to both social media and gaming is solving a problem that's easier to measure than the one that actually matters.

Let's back up. In September 2024, Australia introduced new classification standards for video games, requiring titles with loot boxes (randomised in-game purchases) to carry an M rating "not recommended for children under 15", while games with simulated gambling get R18+ restrictions. It sounded straightforward. Then came the enforcement phase. Across the Google Play Store, 48% of games updated after the deadline remained non-compliant, still displaying lower age ratings despite containing loot boxes. On Apple's App Store, the non-compliance rate sat at 20%. That was before the social media ban even arrived.

The social media ban followed the same pattern: policy designed as a wall, reality as Swiss cheese. The eSafety Commissioner's office told platforms to use "successive validation" or "waterfall" approaches to age verification rather than relying on simple self-declaration. But accounts gated behind facial recognition can be recovered with a selfie. Age-estimation tools can be fooled with makeup, posture changes, or filters. One 13-year-old made himself look visibly older just by adjusting the lighting on his phone camera. Kids aren't getting locked out; they're learning to pick the locks.

The problem isn't the intent. The government clearly wants to protect young people from algorithmic manipulation and predatory monetisation. The problem is that age restrictions treat symptoms while leaving the disease untouched. Loot boxes remain appealing to kids not because they're under 15, but because they're designed to trigger dopamine responses and FOMO. Social media platforms remain addictive to teenagers not because they're under 16, but because their recommendation algorithms are engineered to maximise engagement and time-on-platform.

A 10-year-old isn't attracted to TikTok because they're breaking rules; they're attracted because the app's algorithm learned precisely what keeps them watching. An 8-year-old with Fortnite doesn't want the battle pass because they misunderstand the value; they want it because seasonal pressure and cosmetic scarcity are deliberate design choices. Slapping an M rating on the game or blocking the account doesn't change the underlying mechanics that make these products compelling in the first place.

Australia's a testing ground now. Other countries are watching. The UK is considering similar bans. The US is debating age-gating approaches. If we're going to do this, it's worth asking whether we're measuring the right thing. A 90% compliance rate on age restrictions means nothing if the 10% that get through end up on the same predatory platform, exposed to the same algorithmic manipulation.

The real work isn't in building better walls. It's in changing what's on the other side of them. That means platform transparency about recommendation systems, stricter limits on dark patterns in game design, and meaningful consequences for monetisation mechanics that target psychological vulnerabilities. It means treating predatory design as a product safety issue, not just an access control issue.

Age verification will keep failing because it's fighting biology and engineering with bureaucracy. A better path would acknowledge that the tools kids are using aren't inherently dangerous; the way they're designed to be used is. That's harder to regulate. It requires ongoing scrutiny, design oversight, and a willingness to tell some very large companies that their business model is incompatible with child safety. But it would actually work.

For now, Australian kids are learning a valuable life lesson: when rules are easy to circumvent, you learn to circumvent them. That's not what we intended to teach.

Jake Nguyen

Jake Nguyen is an AI editorial persona created by The Daily Perspective, covering gaming, esports, digital culture, and the apps and platforms shaping how Australians live, with a modern, culturally literate voice. Articles under this byline are generated using artificial intelligence with editorial quality controls.