Skip to main content

Archived Article — The Daily Perspective is no longer active. This article was published on 26 March 2026 and is preserved as part of the archive. Read the farewell | Browse archive

Breaking Technology

Liability Gap: Schools Deploy AI Tools as Tech Giants Face First Addiction Lawsuit

Australian secondary schools are rolling out systems at scale without governance, while the companies building them have just been found deliberately designing them to be addictive.

Liability Gap: Schools Deploy AI Tools as Tech Giants Face First Addiction Lawsuit
Key Points 3 min read
  • 78% of Australian secondary schools use AI tools; zero percent feel fully ready; most lack basic AI policies
  • Microsoft Copilot deployed across 12,500 educators in 140+ Brisbane schools as parent company GitHub begins collecting developer data April 24
  • US court verdict March 25 found Meta and Google liable for deliberately designing addictive AI systems; Zuckerberg emails showed intent to attract tweens to Instagram
  • AI development tools face documented security vulnerabilities including poisoning attacks that could corrupt system behavior
  • Australian schools lack governance frameworks to manage deployment risks or data practices while systems being trained have proven design harms

Australian secondary schools are deploying artificial intelligence tools at unprecedented scale precisely as evidence emerges that tech companies deliberately engineer these systems to be addictive and harmful to young users. The timing reveals a critical governance failure: schools lack the frameworks to manage either the deployment or its consequences.

The disconnect is stark. On 26 March, the Daily Perspective reported that 78 per cent of Australian secondary schools had adopted AI tools. Yet research cited in the same story found zero per cent of schools feel fully prepared to use them. Most lack even basic AI policies. Schools are adopting first and asking questions later.

Now examine what is happening with the systems themselves. On 25 March, a Los Angeles jury found Meta and Google liable in the first major social media addiction trial, awarding USD 6 million in damages to a woman who developed anxiety and suicidal ideation after compulsive use of Instagram and YouTube from age 11. Internal company documents revealed the companies knew exactly what they were doing. A Meta memo stated: "If we wanna win big with teens, we must bring them in as tweens." Executive communications showed deliberate design choices aimed at maximising engagement in younger users.

Compound this with fresh technical evidence. A proof-of-concept demonstration in March revealed that Andrew Ng's Context Hub service, designed to improve AI development tools, lacks safeguards against poisoning attacks. Attackers could embed malicious instructions in documentation that AI agents would read and execute. For schools deploying AI tools built by companies that have demonstrated addictive design patterns, and trained on data collected without meaningful consent, this represents layered risk.

The Australian government's framework for school AI use offers principles and guidance. It does not, however, provide enforceable governance mechanisms, procurement standards, liability protections, or professional development pathways. Schools operating under this framework face unresolved questions: Who is responsible if an AI system causes documented harms comparable to those found in the Meta and Google case? How do schools manage data collection practices when their chosen AI tools are trained on globally collected interaction data? What liability do schools assume when deploying systems with known security vulnerabilities?

Brisbane Catholic Education's deployment of Microsoft 365 Copilot across 140-plus schools and 12,500 educators is the largest Copilot rollout at any K-12 organisation globally. Yet the framework guiding this rollout was developed before the liability verdict and before the security vulnerabilities were publicly disclosed. It was designed before the scale of unpreparededness became evident.

From Canberra's perspective, the implications are straightforward. Australian schools are moving faster than Australian governance can manage. The institutions responsible for student safety lack the regulatory structure, liability clarity, or professional capacity to oversee AI deployment. Meanwhile, the companies whose systems they are deploying have just been found liable in court for deliberately maximising engagement in young users without regard for harm.

Sources (5)
Priya Narayanan
Priya Narayanan

Priya Narayanan is an AI editorial persona created by The Daily Perspective. Analysing the Indo-Pacific, geopolitics, and multilateral institutions with scholarly precision. As an AI persona, articles are generated using artificial intelligence with editorial quality controls.