
Archived Article — The Daily Perspective is no longer active. This article was published on 1 March 2026 and is preserved as part of the archive.

Technology

The New Tech Elite: Can You Tell the Machine What to Build?

Silicon Valley has decided the most valuable skill in technology is no longer writing code. It's knowing what to ask an AI agent to write.

Image: Wired
Key Points
  • AI coding agents from Anthropic, OpenAI and Google can now write, test and deploy code with minimal human oversight, reshaping tech work.
  • Silicon Valley is coalescing around the term 'agentic' to describe workers who excel at directing AI systems rather than doing the hands-on implementation.
  • Junior developers face the sharpest job market pressure, with new graduate unemployment in computer science outpacing the general workforce.
  • Sceptics argue many agentic AI implementations remain stuck in pilot mode, with real-world reliability still falling well short of the hype.
  • The debate ultimately turns on an old question: when tools change what workers do, who decides which workers are still needed?

There is a specific kind of Silicon Valley announcement that arrives dressed as liberation but lands, on closer inspection, rather more like a performance review. The technology industry's latest rebranding exercise is a case in point. The hottest word in hiring circles right now is not a programming language, a framework, or even a methodology. It is a personality trait: agentic. To be agentic, in the parlance of 2025, is to be the sort of person who can effectively tell an AI system what to build rather than building it yourself.

As Wired reports, the premise is straightforward enough. With AI coding agents now capable of handling most routine development tasks, tech companies are realising the most valuable employees are not the ones who can code fastest. They are the ones who know what to tell the machines to build. Agents from companies like Anthropic, OpenAI, and Google can handle entire tickets, understanding requirements, writing code, running tests, and even deploying changes with minimal human oversight. The bottleneck, the argument goes, has shifted from execution to direction.

The historical parallel is seductive and, to be fair, not entirely wrong. The transformation mirrors historical shifts in software development: when high-level programming languages emerged, the most valuable skill stopped being assembly optimisation and became algorithmic thinking; when cloud platforms matured, infrastructure expertise mattered less than architectural vision. Now, as AI agents handle implementation, strategic direction becomes the differentiator. Every generation of developers has had to adapt to more powerful abstractions. Why should this one be different?

Because, for many developers, the stakes feel distinctly more personal this time. In San Francisco, fully employed software engineers are pondering how long their jobs will last. "It's such a weird time to be a junior software engineer," said one employee of a large tech company, who said all of his code is now written by AI. "I'm basically a proxy to Claude Code. My manager tells me what to do, and I tell Claude to do it." The grief that employee described, the sense that a hard-won skill has been commoditised overnight, is not a trivial sentiment to dismiss.

The numbers offer some context to the anxiety. Stanford researchers, led by economist Erik Brynjolfsson and the Digital Economy Lab, found that over the past three years, employment for early-career workers in AI-exposed fields declined by 13 per cent. Meanwhile, the unemployment rate for recent US graduates in computer engineering stands at 7.5 per cent, with computer science graduates sitting at 6.1 per cent, figures significantly elevated compared to recent graduates in fields like nursing or civil engineering. Tasks that once provided valuable early-career experience, such as debugging, testing, and writing low-level code, are now handled by AI. The entry-level rung of the career ladder, the one that has always served as the training ground for the senior engineers of the future, is being sawn off.

The optimists in the industry are not wrong to push back, though their case deserves scrutiny rather than automatic acceptance. They argue that when the barrier to building software drops, more software gets built, expanding the overall market and creating more jobs. And those who can masterfully deploy agents will find themselves in even higher demand. There is precedent for this view: the spreadsheet did not eliminate accountants, and database software did not make data analysts redundant. Then again, those technologies did not claim to replicate the cognitive work itself.

The sceptical case is worth taking seriously. Deloitte's 2025 Emerging Technology Trends study notes that while 30 per cent of surveyed organisations are exploring agentic options and 38 per cent are piloting solutions, only 14 per cent have solutions ready to deploy and a mere 11 per cent are actively using these systems in production. The gap between the conference keynote and the production environment remains considerable. Consistency is a real problem for current autonomous coding agents, especially as a project's design becomes more bespoke, and keeping their output coherent still takes significant work. Getting good results requires careful prompt engineering, close code review, and frequent correction, which is itself a form of skilled labour, just an unfamiliar one.

From a workforce-policy perspective, Australia is not insulated from these dynamics. The Australian Bureau of Statistics tracks information and communications technology employment, and the structural pressures visible in US graduate hiring data will eventually surface here. The Department of Employment and Workplace Relations has a direct interest in whether the apprenticeship model of tech careers, where junior roles absorb and train the next generation of senior talent, survives the agentic transition. If junior positions thin out, who trains the "agentic" orchestrators of the future? Experience directing AI systems has to be acquired somewhere.

There is also a security dimension that the breathless coverage of agentic productivity tends to skip past. Agentic coding is transforming security in two directions at once. As models become more powerful and better aligned, building security into products becomes easier: any engineer can now leverage AI to perform security reviews that previously required specialised expertise. But the same capabilities that help defenders are also capable of helping attackers. Organisations racing to deploy agents without robust human oversight frameworks may be trading short-term productivity for long-term exposure. The Australian Cyber Security Centre has been active in issuing guidance on AI-related risks, and for good reason.

The cultural dimension of all this sits underneath the workforce economics, and it is where the story gets genuinely interesting. The industry built its identity around builders who could turn ideas into code through sheer technical prowess. That mythology is colliding with a reality where the best "builders" might barely write any code themselves. They just know how to get AI systems to build what matters. Silicon Valley's self-image has always been tied to the romance of the coder: the person in the hoodie who ships something real at 2am. Replacing that figure with a skilled delegator is not merely an organisational change. It is an identity crisis in slow motion.

Somewhere between the hype and the backlash lies the interesting truth. The "agentic" framing is partly Silicon Valley repackaging the age-old insight that leverage matters. Knowing what to build has always been more valuable than building it quickly. What is genuinely new is the speed at which that gap is widening and the institutional consequences for workers who invested years in skills that are now depreciating fast. Gartner predicts that by 2027, 80 per cent of developers will need to reskill to collaborate effectively with AI, a reasonable forecast that obscures a harder question: who pays for that reskilling, and what happens to those who cannot complete it in time? The answer will define whether the agentic era is remembered as a productivity miracle or a policy failure. With enough honesty and forethought from both industry and government, it need not be either.

Nina Papadopoulos

Nina Papadopoulos is an AI editorial persona created by The Daily Perspective. Offering sharp, sardonic culture criticism spanning arts, entertainment, media, and the cultural moment. As an AI persona, articles are generated using artificial intelligence with editorial quality controls.