
Published 2 March 2026

Technology

AI and Cybersecurity: Silicon Valley's Promises Are Outpacing the Evidence

As tech giants push AI-powered security tools, researchers warn that the companies whose code is creating the problems may also be selling the cure.

Key Points
  • Research shows between 45% and 62% of AI-generated code contains security flaws or design vulnerabilities.
  • AI tools allow developers to produce code at a pace that overwhelms security review teams, compounding rather than solving the problem.
  • A key conflict-of-interest question has emerged: should the companies selling AI code generators also sell the AI security tools meant to fix them?
  • Australia's ASD has issued guidance on securely integrating AI, but no binding AI-specific regulation yet exists in Australia.
  • Experts say the answer is not to abandon AI development tools, but to pair them with independent oversight and rigorous governance frameworks.

There is a sales pitch circulating through the technology industry that deserves careful scrutiny. It goes roughly like this: AI will generate most of your software, AI will also introduce security vulnerabilities into that software, and AI will then find and fix those vulnerabilities for you. The companies making this argument are, in many cases, the same ones selling all three products. As ZDNet has reported, the pointed question now being asked in security circles is whether that arrangement resembles a fox guarding the hen house.

The underlying security problem is real and growing. The Australian Signals Directorate's Australian Cyber Security Centre confirmed in its 2024-25 Annual Cyber Threat Report that the prevalence of AI almost certainly enables malicious cyber actors to execute attacks on a larger scale and at a faster rate. That threat is not abstract. The report highlights that cybercrime continues to challenge Australia's economic and social prosperity, with the average self-reported cost of cybercrime for businesses rising by 50 per cent to $80,850.

On the supply side, the code being produced with AI assistance is itself a growing source of risk. A recent study found that 62% of AI-generated code solutions contain design flaws or known security vulnerabilities, even when developers used the latest foundation models. Veracode's research tells a similar story: a comprehensive analysis of over 100 large language models, across 80 coding tasks spanning four programming languages and four critical vulnerability types, found that only 55% of AI-generated code was secure. The problems are structural, not incidental. AI coding assistants do not inherently understand an application's risk model, internal standards, or threat profile, and that disconnect introduces systemic risks including logic flaws, missing controls, and inconsistent patterns that erode security over time.
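To make the failure mode concrete, here is a minimal Python sketch, a hypothetical illustration rather than code from the cited studies, of one of the most common patterns such audits flag: a database lookup assembled by string interpolation, a textbook SQL injection, alongside its parameterised fix.

```python
import sqlite3

# Hypothetical example of a "missing control" flaw of the kind the
# cited audits count. Not taken from any of the studies.

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Pattern often produced by code assistants: building SQL by string
    # interpolation. Input like "x' OR '1'='1" rewrites the query's
    # meaning -- a textbook SQL injection.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # The fix: a parameterised query, so user input is treated as data,
    # never as SQL syntax.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.execute("INSERT INTO users (username) VALUES ('alice'), ('bob')")
    # The crafted input dumps every row from the insecure variant...
    print(find_user_insecure(conn, "x' OR '1'='1"))
    # ...while the parameterised variant correctly returns nothing.
    print(find_user_secure(conn, "x' OR '1'='1"))
```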

The velocity problem is perhaps more alarming than the per-line flaw rate. The core issue is throughput asymmetry: AI tools let developers generate and ship code at a pace that far outstrips the capacity of security teams to review, test, and remediate it. Veracode's 2026 State of Software Security report, analysing 1.6 million applications, found that security debt, defined as known vulnerabilities left unresolved for more than a year, now affects 82% of companies, up from 74% twelve months earlier, with high-risk vulnerabilities jumping from 8.3% to 11.3%. The industry's reflex response is predictably circular: use more AI to find and fix the vulnerabilities that AI created, through automated remediation tools, AI-powered code review, and intelligent triage systems.
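The arithmetic of that asymmetry is easy to model. The toy simulation below uses made-up inflow and review rates, not figures from the Veracode report, to show how an unresolved backlog, which is what security debt is, compounds whenever findings arrive faster than teams can clear them.

```python
# A back-of-the-envelope model of throughput asymmetry. The rates below
# are illustrative assumptions, not figures from the Veracode report.

FINDINGS_PER_WEEK = 120   # assumed inflow from AI-accelerated shipping
REVIEWED_PER_WEEK = 80    # assumed team capacity to triage and remediate

backlog = 0
for week in range(1, 53):
    backlog += FINDINGS_PER_WEEK - REVIEWED_PER_WEEK
    if week % 13 == 0:
        print(f"week {week:2d}: {backlog} unresolved findings")

# The backlog climbs by 40 findings per week -- 2,080 after a year.
# Per-finding tooling cannot help unless it changes one of the rates.
```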

There are legitimate arguments in favour of AI-assisted security tooling, and it would be unfair to dismiss them. Cybersecurity faces a genuine workforce shortage; there is a 4.8 million-worker gap globally and existing teams are drowning in alert fatigue. AI agents that can triage alerts and autonomously block threats in seconds represent a real operational benefit for under-resourced security operations centres. Proponents argue that the alternative, abandoning AI development tools to preserve security simplicity, is neither realistic nor desirable given the productivity gains involved. GitHub's 2024 developer survey shows that 97% of developers have used AI tools, with many organisations now relying heavily on these technologies for rapid prototyping and production releases. That ship has sailed.

The conflict-of-interest concern, however, is worth taking seriously on its merits. In 2025, the US Securities and Exchange Commission's Enforcement Division continued to pursue companies and advisers that overstated their AI capabilities, a phenomenon regulators call "AI washing." When the same platform that writes potentially vulnerable code also sells the tool marketed to audit it, independent verification becomes harder to enforce and commercial incentives become harder to disentangle from genuine security outcomes. Research shows AI-generated code often receives less careful scrutiny than human-written code: developers tend to feel less responsible for it and spend less time reviewing it properly, which compounds the risk.

Australia's regulatory posture reflects this complexity. At the time of writing, there is no AI-specific regulation in Australia; however, there is a patchwork of laws regulating critical infrastructure, privacy, consumer protection, data security and more that all touch on aspects of AI development and use. The Australian Signals Directorate's Australian Cyber Security Centre, in collaboration with the US Cybersecurity and Infrastructure Security Agency and international partners, has released guidance on the principles for the secure integration of AI in operational technology, aimed at helping critical infrastructure owners balance AI's benefits with its unique risks. That guidance is instructive but voluntary.

The pragmatic position here is neither to celebrate AI security tools uncritically nor to treat them as inherently compromised. The evidence points toward a middle path: independent code auditing standards, clear organisational governance over which AI tools are permitted in production environments, and mandatory human review for security-critical components. There are growing calls for AI dependency inventories, such as AI Bills of Materials, to document which models and datasets a system relies on and where they came from; a sketch of what such an inventory might record follows below. That kind of structural transparency is far more useful than trusting any single vendor's claims about their own product's safety record. The question of who guards the guardians is old, but in the context of AI and cybersecurity, it has never been more commercially loaded.
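No single AI Bill of Materials schema has been standardised, but the idea is straightforward to sketch. The Python snippet below shows a hypothetical inventory entry; every field name is illustrative, intended only to convey the kind of provenance, vendor, and review metadata such a document would record.

```python
import json

# Hypothetical sketch of an AI Bill of Materials entry. There is no
# single settled AI-BOM schema; all field names here are illustrative.
ai_bom_entry = {
    "component": "code-review-assistant",
    "model": {
        "name": "example-llm",         # assumed model identifier
        "version": "2026.01",
        "provider": "Example Vendor",  # who supplies it -- and whether the
                                       # same vendor also audits its output
    },
    "training_data": {
        "sources": ["public code corpora", "vendor-curated examples"],
        "provenance_verified": False,
    },
    "deployment": {
        "environment": "production",
        "human_review_required": True,  # the mandatory review the article
                                        # argues for on critical components
    },
}

print(json.dumps(ai_bom_entry, indent=2))
```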

Nadia Souris

Nadia Souris is an AI editorial persona created by The Daily Perspective, translating complex medical research and emerging health threats into clear, responsible reporting. Articles under this byline are generated using artificial intelligence with editorial quality controls.