Software development is experiencing a transformation so rapid that seasoned programmers find themselves questioning whether their skills remain relevant. What once required weeks of work now takes hours. Coding as a solitary craft is giving way to a new model where developers orchestrate AI agents, describing problems in natural language and watching systems generate complete, functioning applications.
This shift centres on "vibe coding," a term coined in 2025 by Andrej Karpathy, a former OpenAI researcher, to describe the practice of building software through natural language prompts rather than traditional syntax-heavy programming. The concept found its fullest expression in tools like Anthropic's Claude Code, which runs in the command line and can manage multi-file edits, run tests, and iterate on tasks with minimal human input.
The adoption curve has been steep. According to Stack Overflow's 2025 Developer Survey, 65% of developers now use AI coding tools at least weekly, up from near-zero adoption just three years ago. GitHub's latest State of the Developer report shows 92% of developers are now using AI-powered coding tools, representing a 40% increase from just two years ago. At major tech companies, the practice has become routine; AI now writes as much as 30% of Microsoft's code and more than a quarter of Google's.
For some developers, the experience borders on transformative. One practitioner described riding the subway home, uploading files to Claude Code on his phone and typing prompts like "Load this into a database and make it searchable with a web interface," with the app largely built by the time his train crossed the Manhattan Bridge into Brooklyn. With Claude's assistance, a person with no coding background has released an app on the App Store that now has over 100 users.
Yet productivity gains mask genuine structural problems that industry observers are only beginning to grapple with. Senior engineers are drowning in AI-generated code that demands intense scrutiny: because they cannot trust the AI's logic implicitly, they must verify every line, often without the context of having written it. Average pull request sizes have increased 150%, accompanied by a 9% rise in bug counts, suggesting that while code ships faster, defects ship faster too.
The human cost is also becoming visible. A Stanford University study found that employment among software developers aged 22 to 25 fell nearly 20% between 2022 and 2025, coinciding with the rise of AI-powered coding tools. Some experienced developers report that AI tools can actually slow them down, and many describe a sense that these tools are hollowing out the parts of their jobs they love, turning coding from creative problem-solving into the janitorial work of fixing and managing AI output.
Quality questions extend beyond code structure to security and maintainability. Many AI coding tools are trained on historical repositories, creating a risk that they lack real-time vulnerability awareness and will happily draw from vulnerable libraries. The danger isn't necessarily that AI writes bad code; it's that AI writes working code so quickly that teams ship features before addressing structural problems.
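One pragmatic safeguard against stale dependency knowledge is to gate AI-generated dependency changes through an advisory check before merging. The sketch below is illustrative only: the advisory entries and package names are hypothetical, and a real team would source advisories from a live feed (such as the Python Packaging Advisory Database) rather than a hard-coded table.

```python
# Minimal sketch: flag pinned requirements that fall below a known-patched
# version. ADVISORIES is a HYPOTHETICAL, hard-coded stand-in for a real
# vulnerability feed; package names here are invented for illustration.

ADVISORIES = {
    # package name -> first version containing the security fix (hypothetical)
    "examplelib": (2, 4, 1),
}

def parse_requirement(line):
    """Split a 'name==X.Y.Z' pin into a name and a comparable version tuple."""
    name, _, version = line.partition("==")
    return name.strip().lower(), tuple(int(part) for part in version.split("."))

def flag_vulnerable(requirements):
    """Return the names of pinned packages older than their patched version."""
    flagged = []
    for line in requirements:
        name, version = parse_requirement(line)
        patched = ADVISORIES.get(name)
        if patched is not None and version < patched:
            flagged.append(name)
    return flagged

print(flag_vulnerable(["examplelib==2.3.0", "otherlib==1.0.0"]))
# → ['examplelib']
```

In practice this kind of check runs in continuous integration, so a feature that an AI agent assembled in minutes still cannot merge with a dependency the advisory feed has already flagged.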
Industry voices are divided on whether the current trajectory represents genuine progress or a dangerous illusion of productivity. While individual developer velocity has increased, organisational throughput often runs into unforeseen bottlenecks. Some see the future as fundamentally optimistic; others warn that the ease of code generation has become decoupled from the harder problems of architecture, security, and maintenance that determine whether software actually works in production.
The technology itself continues advancing rapidly. The "Copilot" era, characterised by a human typing and AI suggesting completions, is giving way to the "Agentic" era where the human sets a goal and the AI executes a multi-step plan to achieve it. Tools are becoming more capable at understanding full codebases and making coherent changes across multiple files simultaneously.
For developers navigating this landscape, the practical question is no longer whether to adopt AI tools but how to use them responsibly. Humans will still need to understand and maintain the code underpinning their projects for the foreseeable future, and one pernicious side effect of AI tools may be a shrinking pool of people capable of doing so. Success appears to require understanding both the promise and the genuine pitfalls: enthusiasm tempered by rigorous code review, speed balanced against long-term maintainability, and democratised access weighed against the reality that building software well still demands substantial knowledge and judgment.