
Archived Article — The Daily Perspective is no longer active. This article was published on 1 March 2026 and is preserved as part of the archive.

Opinion Politics

California's OS Age Law and Discord's Verification Mess: A Warning for Australia

A sweeping Californian law and Discord's stumbling rollout reveal just how badly governments and tech companies are fumbling the age verification question.

Key Points
  • California's Assembly Bill 1043, signed in October 2025, requires all operating system providers to collect user ages at account setup from January 2027.
  • The law covers every OS from Windows to Linux and sends age bracket signals to app developers via a real-time API, raising significant enforceability questions.
  • Discord delayed its global age verification rollout to the second half of 2026 after users revolted over privacy concerns and ties to a Peter Thiel-backed vendor.
  • Australia's Online Safety Act already mandates age checks in some contexts, meaning Australian users are directly affected by how these systems are designed.
  • Civil liberties groups warn that age verification frameworks risk normalising identity surveillance under the banner of child protection.

There is a peculiar kind of legislative overconfidence at work when a government writes a law that includes Linux in its compliance requirements. Linux, the operating system famously assembled by volunteers across the globe, maintained by thousands of developers who answer to nobody in Sacramento. And yet, here we are.

On 13 October 2025, California Governor Gavin Newsom signed Assembly Bill 1043, the Digital Age Assurance Act, into law. Effective 1 January 2027, it introduces a device-based age verification system designed to create safer digital environments for children under 18. The mechanism is, in theory, elegant: operating system providers must send digital signals via a real-time API to developers upon request, transmitting the user's age range bracket — under 13, at least 13 and under 16, at least 16 and under 18, or at least 18.
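The bracket logic the law describes is simple enough to sketch. Here is a minimal illustration in Python; the function and bracket names are invented for illustration, since the statute specifies the age ranges an OS must signal, not any particular API shape:

```python
from datetime import date

def age_bracket(birth_date: date, today: date) -> str:
    """Map a birth date to the coarse age-range bracket AB 1043 defines.

    Only this bracket string, never the birth date itself, would be
    transmitted to a requesting app developer.
    """
    # Compute age in whole years, accounting for whether the
    # birthday has occurred yet this year.
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    if age < 13:
        return "under_13"
    if age < 16:
        return "13_to_15"
    if age < 18:
        return "16_to_17"
    return "18_plus"

print(age_bracket(date(2015, 6, 1), date(2027, 1, 1)))  # under_13
```

The coarseness is deliberate: an app learns which of four buckets a user falls into, not their actual age, which is the basis for the law's claim to be privacy-preserving.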

The law demands that age verification be added to the start-up process of any OS device, which would include Windows, Linux, macOS, iOS, Android, and arguably even SteamOS. For Windows, this is barely an inconvenience: Windows already requires users to enter their date of birth during Microsoft Account setup. For the open-source ecosystem, the compliance question is a good deal thornier.

The idea that all operating system providers must comply has drawn considerable ire from Linux communities. "This is basically impossible for California to enforce," wrote one user on the Linux Mint subreddit. "Even if Linux Mint decides to add some kind of age verification to comply with CA law, there's no reason anyone would choose that version." It is a fair point. Open-source distributions are downloaded and modified freely; there is no single publisher to fine, no corporate entity to subpoena. Penalties for non-compliance include up to $2,500 per affected child for negligent violations and up to $7,500 for intentional violations, with enforcement by the California Attorney General and no private right of action. Enforcing those penalties against a decentralised global developer community is, to put it charitably, aspirational.

The law does contain some sensible guardrails. Operating system providers need not collect additional information like photos of government IDs to verify the user's age. And an operating system provider or a covered application store that makes a good faith effort to comply, taking into consideration available technology and any reasonable technical limitations, shall not be liable for an erroneous age signal. That is a meaningful liability buffer — but it does not resolve the deeper question of what happens when a law's practical reach simply cannot extend to its nominal targets.

Meanwhile, the corporate world's own attempts at age verification have been going just as smoothly as you might expect. After drawing widespread criticism with the announcement of a new global age assurance policy, Discord delayed the rollout of its age verification changes until the second half of 2026. The platform, which says it has more than 200 million active users, had initially planned a March rollout — a timeline that collapsed under the weight of user revolt.

In a blog post acknowledging that the company had "missed the mark," Discord co-creator and CTO Stanislav Vishnevskiy said it is revising its age verification strategy to address users' privacy concerns by providing greater transparency and offering alternative verification options. The mea culpa was partial at best. Discord still intends to press ahead; it simply needs more time to convince its users that the whole enterprise is not a data-harvesting exercise dressed up in the language of child safety.

The controversy was deepened considerably by the revelation that Discord had trialled an identity verification vendor called Persona. Persona is backed by the venture capital firm Founders Fund, run by Palantir Technologies co-founder Peter Thiel. Thiel and Palantir are frequently criticised for the company's surveillance partnerships with government agencies; Palantir recently inked an agreement with US Immigration and Customs Enforcement to streamline the identification and deportation of people the agency is targeting.

Discord ran a limited test with Persona in the UK in January 2026 but decided not to move forward. Vishnevskiy said that Persona did not meet the new requirement for facial age estimation to be done entirely on-device, ensuring biometric data stays on the user's phone. Persona's CEO, Rick Song, pushed back. Song wrote in a statement posted to LinkedIn that Discord's claims about Persona's capabilities were not accurate, emphasising that the company does offer on-device age verification. "I'm fine if they don't want to use us," he wrote. "I'm not okay with them publicly saying untrue things about our age assurance technologies to try to shift responsibility away from their own decisions." That is a dispute worth watching; at minimum, it suggests the vendor landscape for age verification is not as settled as regulators seem to assume.

Here is an uncomfortable truth: the child safety argument is genuinely compelling. Nobody serious disputes that minors are exposed to harmful content online, or that platforms have historically done little about it. The problem is not the goal. The problem is the method — and the assumption, embedded in laws like California's AB 1043, that collecting identity signals at the operating system level is an appropriate, proportionate, and enforceable solution.

Platforms are closed-source, audits are limited, and history shows that data — especially ultra-valuable identity data — will leak, whether through hacks, misconfigurations, or retention mistakes. Discord itself disclosed last October that around 70,000 users may have had sensitive data, including government ID photos, exposed after hackers breached a third-party vendor used for age-related appeals. These are not hypothetical risks; they are recent, documented failures.

For Australian readers, this is not a remote American policy curiosity. The United Kingdom's Online Safety Act and similar laws in Australia mandate age checks for certain categories of online content, and Brazil is advancing comparable requirements. The policy momentum is global, and Australia's own online safety framework is being actively shaped by the same assumptions that underpin California's law. How these systems are designed — who holds the data, which vendors are used, what happens when they are breached — matters directly to Australians who use these platforms every day.

The eSafety Commissioner has been expanding its regulatory footprint considerably, and the pressure on platforms to implement age assurance will only intensify following the passage of Australia's social media age restrictions. The Online Safety Act 2021 already requires certain platforms to take reasonable steps to protect Australian users. The question is whether "reasonable steps" will, over time, come to mean the kind of OS-level identity collection California is now mandating.

The strongest progressive argument here deserves a fair hearing: children are being genuinely harmed online, platforms have shown they cannot self-regulate, and if parents cannot reasonably monitor every digital interaction, governments have a legitimate interest in setting structural guardrails. That argument has real weight. The weaker version of the counter-argument — that any age verification is pure surveillance dressed up as safety — overstates the case and forecloses workable compromise.

The Electronic Frontier Foundation has argued that age verification mandates are, by design, censorship and surveillance infrastructure. That framing is pointed, but the underlying concern — that systems built to verify age will inevitably be used for more — is grounded in the actual track record of data collection by both governments and tech companies. Trust, once forfeited, is not easily rebuilt by a blog post from a CTO.

What all of this actually calls for is not a culture war between child safety and civil liberties, but a serious technical and legislative debate about proportionality. California's law is broad enough to arguably capture a Samsung smart fridge with a screen. Discord's rollout was so poorly communicated that users thought mass face-scanning was imminent. Both failures reflect the same underlying problem: the policy conversation is running well ahead of the institutional competence required to implement it safely.

The Australian Communications and Media Authority and policymakers in Canberra would do well to study these missteps carefully before doubling down on their own age verification requirements. Getting this wrong does not just inconvenience adults trying to use a chat app. It creates centralised databases of identity information that are, by the evidence of recent history, reliably compromised. We deserve a better debate than this — and, more to the point, we deserve better execution.

Riley Fitzgerald

Riley Fitzgerald is an AI editorial persona created by The Daily Perspective, writing sharp, witty opinion columns that challenge comfortable narratives from both sides of politics. Articles under this byline are generated using artificial intelligence with editorial quality controls.