Here is a question worth asking plainly: if a technology is being used by nine out of ten professionals in an industry, yet measurable productivity gains remain stuck in the single digits, what exactly is being sold to whom?
That is the question sitting at the heart of the AI coding revolution, a phenomenon that has moved from a viral tweet to a defining feature of the global software industry in the space of twelve months. The term "vibe coding" was coined by AI researcher Andrej Karpathy in February 2025 to describe a workflow in which the developer's primary role shifts from writing code line by line to guiding an AI assistant through a conversational process. The idea caught on with remarkable speed: Merriam-Webster listed it as a "slang and trending" expression by March, and Collins English Dictionary later named it Word of the Year for 2025.
The adoption numbers are genuinely striking. Some 92.6% of developers now use an AI coding assistant at least once a month, and roughly 75% use one weekly; AI is no longer a side experiment but part of the everyday workflow. Among 4.2 million developers surveyed between November 2025 and February 2026, AI-authored code now makes up 26.9% of all production code, up from 22% the previous quarter, and daily AI users have nearly a third of their merged code written by AI.
The fundamental question is whether any of this is actually making software better, or faster, or cheaper to maintain. And here the picture becomes considerably less comfortable for the boosters.
A randomised controlled trial conducted by METR, designed to measure how early-2025 AI tools affect the productivity of experienced open-source developers, found that developers using AI tools actually took 19% longer to complete tasks than those working without them. This is not a minor caveat. Before starting tasks, developers had forecast that AI assistance would reduce completion time by 24%; after finishing, they estimated it had reduced time by 20%. The reality was the opposite: people believed they were being helped when the data suggested otherwise.
The security picture is no more reassuring. Vibe coding raises serious concerns about understanding and accountability: developers ship AI-generated code without fully comprehending its functionality, which lets bugs, errors, and security vulnerabilities pass undetected. In May 2025, Lovable, a Swedish vibe coding application, was reported to generate code with security vulnerabilities; 170 of 1,645 Lovable-created web applications examined had an issue that would allow personal information to be accessed by anyone. These are not edge cases; they are structural risks baked into a methodology that treats code review as optional.
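To make the class of flaw concrete: vulnerabilities of this kind are often as simple as a data-access function that returns a record to whoever asks for it, with no check that the caller is entitled to see it. The sketch below is a hypothetical illustration in plain Python, not Lovable's actual code; the data store and function names are invented for the example.

```python
# Toy in-memory "database" of user profiles (hypothetical example data).
PROFILES = {
    1: {"owner": "alice", "email": "alice@example.com"},
    2: {"owner": "bob", "email": "bob@example.com"},
}

def get_profile_insecure(profile_id):
    """The AI-generated-style version: returns any record to any caller.

    Anyone who can guess or enumerate profile ids can read every user's
    personal information -- the shape of flaw described above.
    """
    return PROFILES.get(profile_id)

def get_profile_secure(profile_id, requesting_user):
    """The reviewed version: a record is only returned to its owner."""
    record = PROFILES.get(profile_id)
    if record is None or record["owner"] != requesting_user:
        return None
    return record
```

The fix is a single ownership check, which is precisely the kind of line a reviewer adds and a purely conversational workflow tends to skip.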
The counter-argument deserves serious consideration: the critics are largely measuring performance among experienced developers working on mature, complex codebases. That is an important constraint. METR itself acknowledged that its results do not mean AI is useless in software engineering; it seems plausible that AI tools are useful in other contexts, for example for less experienced developers, or for developers working in an unfamiliar codebase. AI does lower the barrier to entry for new creators and acts as a force multiplier for experienced developers when properly applied, allowing everyone to focus more on creative problem-solving and less on manual implementation.
There are also genuine enterprise wins on the record. Data collected across more than 121,000 developers shows that AI is dramatically speeding up onboarding: the time a new developer takes to reach their tenth pull request has been cut in half. According to Stack Overflow's 2025 Developer Survey, 65% of developers now use AI coding tools at least weekly. The technology is clearly doing something for somebody.
Strip away the talking points and what remains is a technology with genuine utility in specific, bounded contexts, being applied promiscuously across contexts where it may cause more harm than good. AI-assisted coding may be well suited to prototyping or throwaway weekend projects, as Karpathy originally envisioned, but experts consider it risky in professional settings where a deep understanding of the code is crucial for debugging, maintenance, and security. Across 2025, the industry itself began to recognise this, with the loose, vibes-based style giving way to a more systematic approach to managing how AI systems process context.
For Australian technology businesses, the calculus involves one further dimension. The economics of serving large language models provides a real incentive for American technology companies to hire engineers whose working hours fall outside US peak demand periods, meaning Australian engineers operating in their own time zone could find themselves in genuine demand. That is an intriguing structural advantage, but only for engineers who retain the foundational skills to supervise and interrogate AI-generated output rather than simply accepting it.
The core competency for developers in this environment is no longer just writing code, but effectively orchestrating the AI tools that write code alongside them. That is a meaningful distinction. Orchestration requires judgement, domain expertise, and an understanding of what good code actually looks like. Rather than replacing developers, AI is enhancing their abilities, speeding up workflows, automating repetitive tasks, and allowing engineers to focus on higher-level problem-solving, provided those engineers possess the baseline to make those judgements in the first place.
The GitHub Copilot generation of tools, and the vibe coding philosophy more broadly, will not disappear. The economic pressures driving adoption are real, and the technology will continue to improve. But the organisations and individuals who will benefit most are those who treat AI as a capable junior colleague requiring supervision, not as an oracle whose output can be deployed without scrutiny. The industry is, slowly, arriving at this conclusion itself. Whether it does so before a significant security or reliability incident forces the conversation is the more pressing question.
Reasonable people can disagree about how quickly to integrate these tools, and about where to draw the line between speed and rigour. What they cannot reasonably disagree about is the need to draw that line at all. The Australian Bureau of Statistics does not yet track AI coding adoption specifically, but the global trajectory is clear enough: the tools are here, the risks are documented, and the professional responsibility to use them thoughtfully falls squarely on the humans still nominally in charge.