
Archived Article — The Daily Perspective is no longer active. This article was published on 18 March 2026 and is preserved as part of the archive.

Technology

The AI coding paradox: too useful to resist, too risky to fully trust

Developers face a dangerous choice as intelligent agents handle more code with less human oversight

Key Points
  • AI coding tools are increasingly autonomous, handling entire features with minimal human supervision, yet this erodes the hands-on experience developers need to catch errors.
  • Research shows AI-generated code can contain security weaknesses at rates ranging from roughly 25% to 62% across studies, yet developers often feel more confident in the results.
  • Experts warn of a 'lethal trifecta' risk: when agents have access to untrusted content, private data, and external communication capabilities simultaneously.
  • The tension between productivity gains and skill erosion reflects a genuine trade-off that organisations must actively manage through governance, not ignore.

At QCon London, Birgitta Böckeler, global lead for AI-assisted software delivery at Thoughtworks, warned that AI-assisted development is in a dangerous state: the tools are too useful not to use, yet by using them developers give up the very experience they need to review what the tools produce.

The concern reflects a genuine bind facing organisations deploying advanced coding agents. Advances in agent orchestration, sub-agents, and reduced supervision are strong forces tempting humans out of the loop. Yet this apparent efficiency masks a deeper problem. As Böckeler put it, developers want to use AI because it is so useful, but they cannot, and maybe never will be able to, hand everything over to it; at the same time, they are gaining less experience because they are no longer doing the work themselves.

Birgitta Böckeler speaks at QCon London 2026 on the risks of reduced human oversight in AI development.

The practical risks are substantial. AI is not safe: it makes errors and is vulnerable to attacks such as prompt injection. Independent research reinforces the concern: approximately 25-30% of code generated by models such as GitHub Copilot contains Common Weakness Enumerations (CWEs), and a recent study found that 62% of AI-generated code solutions contain design flaws or known security vulnerabilities, even when developers used the latest foundation models.

The danger extends beyond code quality to developer psychology. A recent study found that coders who used an AI assistant wrote significantly less secure code than those who did not, yet believed their code was more secure than that of the group writing code by hand. This combination of reduced scrutiny and inflated confidence amplifies risk.

Böckeler identified what she framed as a critical security concern. When an agent has exposure to untrusted content, access to private data, and the ability to communicate externally, there is a high risk of data leakage and security breaches; giving an agent read and send rights to email alone is enough to trigger the problem. This scenario, sometimes called the "lethal trifecta," represents a structural vulnerability in how autonomous agents are deployed.
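To make the trifecta concrete, the sketch below shows a deployment-time guard that refuses to start an agent holding all three capabilities at once. The names are hypothetical and real agent frameworks expose permissions differently; this only illustrates the rule described at the talk.

    from dataclasses import dataclass

    @dataclass
    class AgentCapabilities:
        reads_untrusted_content: bool   # e.g. web pages, inbound email
        reads_private_data: bool        # e.g. mailbox contents, source code
        communicates_externally: bool   # e.g. sends email, makes HTTP requests

    def check_lethal_trifecta(caps: AgentCapabilities) -> None:
        """Refuse to run an agent that combines all three risk factors."""
        if (caps.reads_untrusted_content
                and caps.reads_private_data
                and caps.communicates_externally):
            raise PermissionError(
                "untrusted input + private data + external communication: "
                "a prompt injection could exfiltrate data"
            )

    # An email agent with read and send rights trips the check on its own:
    mail_agent = AgentCapabilities(True, True, True)
    try:
        check_lethal_trifecta(mail_agent)
    except PermissionError as exc:
        print(f"blocked: {exc}")

Dropping any one capability, for instance routing all outbound messages through human approval, breaks the chain.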

Autonomous systems are increasingly integral to software development workflows.

The industry is not ignoring these warnings. Adoption is widespread: 84% of developers use AI tools, around 51% of professionals use them daily, and teams report material productivity gains of approximately 3.6 hours per week. Yet independent code analysis raises a clear caution: without governance, AI-assisted code can carry 1.7 times as many issues, including security findings.

Developers are in the business of risk assessment, which combines three variables: probability, impact, and detectability. The challenge is that these variables shift as tools become more powerful. Böckeler observed that the longer an agent goes without supervision, the more review its output requires afterwards.
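As a rough illustration of how the three variables interact (the scoring scale and the numbers below are our own assumptions, not figures from the talk), a simple multiplicative score shows why longer unsupervised runs demand more review:

    def risk_score(probability: float, impact: float, detectability: float) -> float:
        """probability and impact on a 0-1 scale; detectability in (0, 1],
        where 1.0 means an error is certain to be caught in review."""
        return probability * impact / detectability

    # More autonomy raises the probability of unreviewed errors and lowers
    # detectability as developers accrue less hands-on experience:
    print(risk_score(probability=0.3, impact=0.8, detectability=0.9))  # ~0.27
    print(risk_score(probability=0.5, impact=0.8, detectability=0.4))  # 1.0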

Solutions exist but require discipline. Training developers in secure prompting is essential: prompts are now the design specification for the code, and developers need to be taught not just how to use AI coding assistants but how to guide them with specificity, as illustrated below. Conducting thorough code reviews and exercising control over LLM outputs help secure AI-driven development against evolving threats.
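As a hypothetical example of prompting with specificity (the wording is ours, not from the talk), compare a vague request with one that works as a design specification:

    # Vague: leaves every security decision to the model.
    VAGUE_PROMPT = "Write a function that saves an uploaded file."

    # Specific: the prompt states the security constraints up front.
    SPECIFIC_PROMPT = (
        "Write a Python function that saves an uploaded file. "
        "Validate the filename against an allow-list, reject path "
        "traversal ('..' segments and absolute paths), enforce a "
        "10 MB size limit, and write only beneath UPLOAD_DIR."
    )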

The tension between productivity and risk is real and cannot be wished away. Organisations deploying AI coding agents must treat them not as fully trusted collaborators but as tools requiring active governance. The productivity gains are genuine, but so are the risks. The question for development teams is not whether to use AI, but how to use it responsibly.

Nadia Souris

Nadia Souris is an AI editorial persona created by The Daily Perspective, translating complex medical research and emerging health threats into clear, responsible reporting. Articles under this byline are generated using artificial intelligence with editorial quality controls.