
Archived Article — The Daily Perspective is no longer active. This article was published on 9 March 2026 and is preserved as part of the archive.

Education

Australia needs clearer rules on when students can use AI

Universities are rushing to separate learning from assessment, but schools remain dangerously unguided

Image: Sydney Morning Herald
Key Points
  • Universities implementing 'two-lane' approach: unrestricted AI in learning, banned in supervised exams
  • Higher education regulator TEQSA enforcing standards; schools have no equivalent oversight body
  • Meta-analysis shows AI can improve learning outcomes, but only when used strategically by trained teachers
  • Current detection methods unreliable; experts warn that policy frameworks alone won't prevent cheating

Australian universities are drawing a pragmatic line: use AI as much as you want to learn, but put it away during assessment.

From Semester 2, 2025, the University of Sydney has prohibited students from using AI in supervised tasks such as exams and tests unless explicitly permitted, while allowing AI tools for general learning, provided students follow institutional policies on academic integrity. The university calls this the "two-lane" framework: secure assessments conducted in person, where AI is restricted, and open assessments that allow productive engagement with AI tools as part of learning.

This reflects broader institutional thinking. TEQSA, the higher education regulator, announced a shift to a regulatory-led framework beginning in 2026, reflecting widespread student use of generative AI and the associated risks to assessment integrity. Universities understand the stakes: TEQSA has enforcement power and can hold institutions accountable during reviews, creating real consequences for institutions that fail to manage AI responsibly.

The learning benefits are measurable. A meta-analysis of experimental studies found a medium positive effect of ChatGPT on students' higher-order thinking, though the authors caution that implementation must be tailored to course type and learning model rather than applied arbitrarily.

Yet the framework has critical vulnerabilities. Australian researchers warn that traffic-light assessment systems give evaluators false security while potentially disadvantaging students who follow the rules: like actual traffic lights without enforcement cameras or patrols, they lack any meaningful enforcement mechanism. No reliable AI detection technology currently exists to verify student compliance.

The picture grows grimmer at secondary school level. The schools framework is endorsed by Education Ministers but is not enforceable; no body ensures schools actually follow the guidance. This matters enormously: AI in schools is at least as relevant as in universities, yet school students are younger, more vulnerable, and less equipped to critically evaluate AI outputs.

Australia's national approach emphasises responsible use, at least in theory. The national framework promotes responsible and ethical AI use, allowing students to use tools for learning but not to cheat or violate academic integrity. Yet New South Wales temporarily blocked student access to generative AI on school networks, citing a lack of robust content filters, even while running an internal pilot with NSWEduChat.

The honest assessment is that policy frameworks alone will not prevent cheating. Universities have responded by redesigning what they assess and how. Experts recommend building validity into assessment design itself through structural approaches, such as requiring essays under supervision, random questioning during oral exams, or tutor sign-off on lab work. This is labour-intensive and requires educator skill, but it works.

Australian institutions should welcome AI into learning spaces without apology. The technology can assist understanding, reduce anxiety, and improve academic writing. But assessment integrity demands more than good intentions. Universities have the regulatory backing and institutional resources to build AI-resistant assessment design. Schools do not. Until schools gain equivalent oversight and accountability frameworks, the gap between universities and secondary education will only widen, disadvantaging students in the institutions least equipped to manage it.

James Callahan

James Callahan is an AI editorial persona created by The Daily Perspective, reporting from conflict zones and diplomatic capitals with vivid, immersive storytelling that puts the reader on the ground. As the work of an AI persona, these articles are generated using artificial intelligence with editorial quality controls.