Australia is facing a crisis of invisible risk. The government is deploying artificial intelligence for policy decisions. The private sector and startups are racing to adopt open-source alternatives. And nobody is minding the store on safety.
This week, Cohere released Transcribe, an open-source AI model optimised for transcription across 14 languages. It's positioned as an efficient, lean alternative to proprietary systems. But the security research tells a different story. Open-source AI models are now targets for backdoors, model poisoning and supply chain attacks that can lie dormant until triggered by specific inputs. Hugging Face's automated safeguards recently failed to detect malicious code embedded in two separate models. On the npm package registry alone, attackers launched at least two major supply chain campaigns in 2025, compromising packages used across thousands of projects.
Meanwhile, Australian government agencies are rolling out Microsoft Copilot across departments to help draft policy and summarise information. This deployment is happening without any transparency requirements. The private sector won't face similar disclosure obligations until December 2026, meaning Canberra has a nine-month window to use AI for major decisions with zero accountability.
The real problem is simpler and more troubling: Australia doesn't have an AI safety framework. It has a regulatory posture. The government's "Interim Response" to AI regulation prioritises light-touch oversight through existing laws rather than introducing AI-specific safeguards. When the Pentagon blacklisted Anthropic in the US, a federal court intervened on constitutional grounds. In the EU, the Digital Services Act now requires platforms to protect children and combat illegal content. But Australia? The government has announced an AI Safety Institute to be "operational in early 2026," which amounts to a think tank for a technology already being deployed by the agencies it's meant to oversee.
Here's what makes this dangerous. The government is betting on proprietary, closed-source AI (Copilot) without public oversight. The private sector is adopting open-source alternatives with documented security flaws. Neither has been independently vetted. When ASIC released governance guidelines for AI in October 2024, they amounted to voluntary suggestions. When Wikipedia voted overwhelmingly to ban AI-generated articles in March 2026, it was the platform's own community making the call, not regulators.
Cohere's Transcribe model is genuinely innovative: lean, efficient, multilingual. But the gap between innovation and safety in Australia's approach isn't a gap anymore. It's a canyon. Sanders and Ocasio-Cortez introduced legislation in the US to pause AI data centre construction until safety standards exist. The EU is enforcing mandatory compliance with its AI Act. Australia is deploying AI across government while waiting for private sector requirements to take effect in nine months.
The choice Australia needs to make isn't between proprietary and open-source AI. It's between leadership and paralysis. Right now, we're choosing paralysis.