Skip to main content

Archived Article — The Daily Perspective is no longer active. This article was published on 1 March 2026 and is preserved as part of the archive. Read the farewell | Browse archive

Technology

AI Speed Is Outpacing Software Security, Major Report Warns

A landmark study of 1.6 million applications finds more vulnerabilities being created than fixed, as AI-accelerated development widens the remediation gap.

AI Speed Is Outpacing Software Security, Major Report Warns
Image: The Register
Key Points 4 min read
  • Veracode's 2026 State of Software Security report found 82% of organisations now carry security debt, up from 74% the previous year.
  • High-risk vulnerabilities, those both severe and likely to be exploited, rose 36% year-on-year from 8.3% to 11.3% of applications tested.
  • AI-generated code is increasing technical complexity and making it harder for security teams to remediate existing flaws quickly enough.
  • The report notes that detection is modestly improving, with open-source vulnerability rates dropping from 70% to 62%, but the remediation backlog continues to grow.
  • Veracode warns that 'transformational change' is required, though what form that should take remains an open question for the industry.

For years, the software industry has promised that better tooling and artificial intelligence would tame the chronic problem of insecure code. A major new report suggests the opposite is happening. Security debt is growing faster than it is being paid down, and the breakneck pace of AI-assisted development is making matters considerably worse.

Veracode's 2026 State of Software Security report, now in its 16th annual edition, drew on data from 1.6 million applications tested across its cloud platform. The findings are sobering. The company defines security debt as known vulnerabilities left unresolved for more than a year. By that measure, 82 per cent of organisations now carry it, up from 74 per cent in the previous year's report. Of those, 60 per cent carry debt classified as "critical", representing flaws severe enough to cause catastrophic damage if exploited, as reported by The Register.

Application security is an increasing problem, according to Veracode's latest report
Year-on-year changes in key security metrics from Veracode's 2026 State of Software Security report. The gap between vulnerability creation and remediation has widened markedly.

The category that should concern security officers most is high-risk vulnerabilities: flaws that combine high severity with a strong likelihood of being exploited in the wild. Those rose from 8.3 per cent to 11.3 per cent year-on-year, a 36 per cent spike. The dataset behind these figures is substantial. The full report analysed 141.3 million raw findings drawn from static code analysis, dynamic runtime testing, software composition analysis, and manual penetration testing.

The AI development paradox

The report points squarely at the rising use of AI coding tools as a compounding factor. Veracode's researchers attribute growing technical complexity in codebases partly to AI-generated code, which makes remediation harder for human engineers to complete at pace. The core problem, as The Register reported, is that new code is being added more quickly than existing vulnerabilities are being addressed. Veracode's own prior research found that roughly 45 per cent of AI-generated code contains security flaws, and that this security failure rate has remained largely unchanged even as the models producing the code have become markedly more capable at generating syntactically correct software.

This is not purely a theoretical concern. A recent experiment by Cloudflare, in which a significant application was built in roughly a week with minimal human review of the generated code, illustrates how security accountability can become diffuse when development velocity is the primary goal. The report's authors are unsparing in their assessment: "The velocity of development in the AI era makes comprehensive security unattainable," and "the remediation gap has reached crisis proportions."

There is also the threat side to consider. Malicious actors are using the same AI tools, deploying them to scan targets at scale, identify weaknesses, and generate exploit code. The report notes that attackers may also attempt to manipulate AI models through prompt injection, a technique in which crafted inputs cause a model to behave in unintended ways. The dual-use nature of these tools complicates any simple policy response.

Where the picture is less bleak

It would be inaccurate to read the report as uniformly negative. The proportion of applications carrying open-source vulnerabilities fell from 70 per cent to 62 per cent, and overall flaw prevalence edged down from 80 per cent to 78 per cent. Veracode's researchers acknowledge that better and more widespread testing tools may themselves be driving the visibility of problems that previously went undetected. If more vulnerabilities are being discovered because scanning has improved, some portion of the apparent deterioration in security debt is a measurement artefact rather than a genuine decline.

The false positive problem also deserves scrutiny. AI-assisted scanning tools can generate substantial volumes of alerts that do not represent genuine threats, creating a burden on human reviewers that may itself cause real vulnerabilities to be deprioritised. The report does not quantify the false positive rate, which means the headline figures should be read with appropriate caution.

Advocates for AI-assisted security will correctly note that these same tools can automate remediation, not merely detection. As The Register observed, Veracode itself acknowledges AI's potential to help close the gap, even as the same report documents that the gap is currently widening. That tension is not dishonesty; it reflects a genuine ambiguity in where the technology is heading.

A structural problem, not a vendor one

The deeper issue is structural. Software development has always operated under commercial pressure to ship features quickly. AI coding assistants have supercharged that pressure by compressing the time between idea and deployed code. Security review, which requires careful, context-aware human judgement, has not scaled at anything close to the same rate. The Australian Cyber Security Centre has consistently flagged unpatched vulnerabilities and poor software supply chain hygiene as primary vectors for significant cyber incidents affecting Australian organisations, public and private alike.

Veracode's recommended response, a framework it labels "Prioritise, Protect, and Prove", is sensible as far as it goes: focus resources on the most critical assets, embed security controls into development pipelines, and generate auditable evidence of compliance. The honest difficulty is that the report identifies the need for "transformational change" without specifying what that change should look like in practice. The industry's instinct will almost certainly be to promote more AI-powered security tooling as the answer, which creates a somewhat circular problem given the evidence that AI development is a material contributor to the debt accumulation in the first place.

Reasonable people can disagree about how to weigh the productivity gains from AI development against the accumulating security risks. The gains are real and significant; so is the evidence that the current approach to managing those risks is falling short. What the Veracode data makes difficult to dispute is that the status quo, scanning late and fixing later, is not working. The question for organisations, regulators, and the technology industry alike is whether the response will be genuinely transformational or simply another layer of tools applied to a problem that is, at its core, about accountability, process, and the allocation of time.

Sources (1)
Nadia Souris
Nadia Souris

Nadia Souris is an AI editorial persona created by The Daily Perspective. Translating complex medical research and emerging health threats into clear, responsible reporting. As an AI persona, articles are generated using artificial intelligence with editorial quality controls.