
Archived Article — The Daily Perspective is no longer active. This article was published on 25 February 2026 and is preserved as part of the archive.

Politics

Children's Health Records to Test Social Media Ban's Results

The eSafety Commissioner will monitor ADHD prescriptions, sleep quality and NAPLAN scores across thousands of Australian children to determine whether the world's most ambitious digital age restriction has delivered on its promise.

Image: Sydney Morning Herald
Summary

Medical and education records from thousands of Australian children will be tracked for years to evaluate the Albanese government's social media age ban.

The Albanese government's signature social media age restrictions for children will be subjected to a longitudinal evaluation of considerable scope, with the eSafety Commissioner planning to track prescription rates for Ritalin and other ADHD medications, children's sleep quality, and National Assessment Program - Literacy and Numeracy (NAPLAN) results over a sustained period.

Medical and education records from thousands of Australian children will be analysed in the years following the ban's implementation, according to the Sydney Morning Herald, as the government seeks to establish whether its landmark legislation is delivering measurable improvements to child wellbeing.

The policy in question prohibits children under 16 from holding accounts on major social media platforms. It passed the federal parliament in late 2024 with bipartisan support, representing the most sweeping digital restriction on minors enacted by any comparable democracy, and drew immediate international attention for both its ambition and its enforcement challenges.

That the government now proposes a structured, data-driven evaluation is, in principle, exactly the kind of institutional accountability that policymakers should demand of major social reforms. When parliament legislates in the name of protecting children, it incurs an obligation to determine whether those protections are actually working. The logic is straightforward, if politically unpalatable for those who would prefer the narrative to go unchallenged: without rigorous evaluation, good intentions can stand in indefinitely for good outcomes.

The choice of indicators is instructive. Ritalin and similar stimulant medications are commonly prescribed for attention deficit hyperactivity disorder, a condition whose prevalence and diagnosis rates have attracted considerable research interest in the context of smartphone and social media exposure. Sleep quality is a well-documented casualty of late-night device use among adolescents. NAPLAN scores, while a contested measure of educational attainment, provide a consistent national dataset against which trends can be tracked. Taken together, these metrics reflect a reasonable attempt to capture the downstream health and educational consequences that motivated the legislation in the first place.

Critics of the data collection approach, however, raise concerns that warrant serious engagement. Privacy advocates have questioned whether aggregating children's medical and educational records, even for ostensibly beneficial research purposes, establishes a precedent of state surveillance over minors that is difficult to wind back. The Office of the Australian Information Commissioner administers protections under the Privacy Act, but the scope of what is collected and how long it is retained will require close scrutiny from civil liberties organisations and parliamentary committees alike.

There is also a methodological question about attribution. Social media use is not the only variable affecting children's sleep, attention, and academic performance over a multi-year period. Economic pressures on families, changes to school curricula, the lingering effects of pandemic-era disruption, and broader shifts in how young people spend their time will all influence the indicators being tracked. Isolating the effect of the social media ban from this broader context will be a significant analytical challenge, and one that honest researchers will be candid about.

From a different direction, those who supported the ban most enthusiastically may find themselves disappointed if the data proves ambiguous. One need only recall the experience of similar interventions, including screen time recommendations from the American Academy of Pediatrics that were later softened as evidence accumulated, to appreciate how quickly confident claims about technology and child development can become complicated by real-world data.

The eSafety Commissioner, as Australia's dedicated online safety regulator, is the appropriate body to oversee this evaluation, though its independence from the government that commissioned both the policy and the assessment will need to be beyond question. An evaluation perceived as designed to validate a predetermined outcome would undermine not only its own credibility but the broader case for evidence-based digital policy.

What is at stake, and this point bears emphasis, is more than the political fortunes of a single piece of legislation. Australia has positioned itself as a global test case for assertive state intervention in children's digital lives. The quality of the evidence gathered here will inform debates in Westminster, Washington, and Brussels for years to come. That is a responsibility the eSafety Commissioner, and the government that set this course, should carry with considerable seriousness.

The case for evaluation is unambiguous. The design of that evaluation, and the transparency with which its findings are eventually shared, will determine whether this exercise serves the public interest or merely the political one.

Marcus Ashbrook

Marcus Ashbrook is an AI editorial persona created by The Daily Perspective, covering Australian federal politics with deep institutional knowledge and historical context. Articles published under this persona are generated using artificial intelligence with editorial quality controls.