From London: The boardroom pressure to deploy artificial intelligence is now all but irresistible. Finance teams want it to cut costs, marketing departments want it to generate content, and IT leaders face the uncomfortable reality that their organisation's competitors are not waiting. But as Australians have been discovering at considerable expense, moving fast on AI without moving carefully on security is a gamble that increasingly ends badly.
The numbers are sobering. According to the Australian Cyber Security Centre's annual threat report for 2024-25, the average cost of cyber crime jumped 14% to $56,600 for small businesses, 55% to $97,000 for medium businesses, and a staggering 219% to $202,700 for large organisations. That trend is being driven, in significant part, by AI tools that open new attack surfaces while companies are still writing the rulebook for how to close them.
Reporting by ZDNet on enterprise security practices highlights what experienced security leaders already know: there is no shortage of AI ambition inside organisations, but security discipline rarely keeps pace. The publication outlines five areas where businesses most commonly fail when rolling out AI, and the pattern is consistent with findings from cybersecurity researchers across the industry.
Most organisations lack adequate monitoring and governance over their AI model behaviour, training data integrity, and agent authentication systems. That visibility gap is not a technical inconvenience; it is a structural vulnerability. AI agents are scaling faster than some companies can see them, and that visibility gap is a business risk. The problem compounds quickly when employees bring their own AI tools into the workplace without IT's knowledge.
Research from Zylo's 2025 SaaS Management Index found that 77% of IT leaders discovered AI-powered features or applications operating without IT's awareness. This so-called shadow AI problem extends beyond ChatGPT use on personal devices. Teams across enterprises quietly deployed private or third-party large language models outside official oversight, and by 2026 these shadow models represent a significant and largely invisible attack surface, introducing unmonitored data flows, unknown training retention, and inconsistent access controls.
The threats themselves have also changed character. The most common AI security risks in enterprises include adversarial machine learning attacks, data poisoning, prompt injection in large language models, and supply chain attacks. These threats exploit vulnerabilities unique to AI systems, such as manipulation of training data, model corruption, and unauthorised access by AI agents or APIs. In June 2025, researchers discovered what became known as EchoLeak: a vulnerability that exposed sensitive Microsoft 365 Copilot data without any user interaction, bypassing human behaviour entirely by manipulating how Copilot interacts with user data.
For Canberra, the implications extend beyond individual business losses. Security agencies from the United States, Australia, New Zealand, and the United Kingdom jointly authored guidance on AI data security, published in May 2025, reflecting a recognition that the risks are systemic rather than confined to any single sector. Australia's own signals directorate has co-authored joint guidance with CISA on integrating AI into critical infrastructure, acknowledging that AI integration into operational technology environments introduces significant risks, such as process models drifting over time or safety-process bypasses, that owners and operators must carefully manage.
Advocates for rapid AI adoption make a legitimate counter-argument that excessive caution carries its own costs. In an era when Australian firms compete with international counterparts who have already embedded AI deeply into their operations, delayed adoption translates directly into lost productivity and market share. A report from Microsoft and IDC found that for every dollar organisations invest in generative AI, they are realising an average return of $3.70. From this perspective, security concerns that slow deployment are not neutral; they are a competitive handicap.
There is also a fairness dimension worth acknowledging. Small and medium enterprises, which lack dedicated security teams, face a genuine dilemma: the AI tools their larger competitors use freely may be genuinely inaccessible if the compliance burden is set too high. Regulatory frameworks designed for multinationals can crowd out exactly the smaller operators who stand to benefit most from productivity-enhancing technology.
But the pragmatic middle ground here is not difficult to locate. Experts say that to be successful and secure with AI, businesses must first establish clear guidelines, educate and train their employees, and then grant access. That sequencing, governance before access, rather than governance after incident, is the consistent message from practitioners who have done this well. Organisations that treat securing AI agents not as a constraint but as a competitive advantage, built on treating AI agents like humans and applying the same zero-trust principles, are outperforming those that treat security as an afterthought.
The Australian Cyber Security Centre's guidance on using AI systems securely and the Australian Signals Directorate's broader Essential Eight framework provide a practical starting point for any organisation that has not yet formalised its AI governance posture. The investment in getting this right upfront is, by any measure, far cheaper than explaining a six-figure breach to the board after the fact.