Nvidia CEO Jensen Huang has proposed something unusual: paying his engineers partly in artificial intelligence. Not as a cut to salary, but as an addition to it, in the form of access to AI compute delivered through tokens, the basic units that AI systems consume when processing text or code.
Under Huang's model, engineers would earn a few hundred thousand dollars in base pay annually, plus tokens worth another half of that amount, so they could be "amplified 10 times." For a $500,000 engineer, that means at least $250,000 worth of token consumption each year; Huang said he would be "deeply alarmed" if they spent far less.
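Huang's figures imply enormous raw token volumes. A back-of-envelope sketch makes this concrete; the per-million-token prices below are assumed round figures for illustration, not actual vendor rates:

```python
# Rough token-budget arithmetic for a $250,000 annual allowance.
# Prices per million tokens are ASSUMED illustrative values, not quoted rates.

def tokens_for_budget(budget_usd: float, price_per_million_usd: float) -> float:
    """Return how many tokens a dollar budget buys at a given price per 1M tokens."""
    return budget_usd / price_per_million_usd * 1_000_000

annual_budget = 250_000  # Huang's expected yearly consumption for a $500k engineer

# Hypothetical blended prices spanning cheap and frontier-model tiers
for price in (0.50, 5.00, 15.00):
    tokens = tokens_for_budget(annual_budget, price)
    print(f"${price:.2f}/M tokens -> {tokens / 1e9:.1f} billion tokens/year")
```

Even at the priciest assumed tier, the budget buys well over ten billion tokens a year, which is why the proposal is framed in terms of billions of tokens per engineer.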
The thinking is straightforward. Tokens can be spent to run AI tools and automate tasks, and they are becoming "one of the recruiting tools in Silicon Valley." Nvidia aims to spend $2 billion annually on tokens for its engineering team. If engineers have ready access to billions of tokens, the theory goes, they will delegate more work to AI agents, radically accelerating output. Huang said AI agents have turned month-long development cycles into 30 minutes.
This is not Huang's private fantasy. Silicon Valley is already experimenting with new ways to compete for talent by treating AI inference power as a "fourth component" of compensation, with some investors suggesting that companies list token budgets in job postings.
The Productivity Question
The case for token-based compensation has genuine merit. Data analysis that once required specialized teams becomes something any product manager can do over lunch. Work that used to take months now takes a couple of days. If AI truly multiplies engineering output, then offering compute power as compensation makes economic sense; it costs Nvidia chips and electricity, not cash.
But there is a significant wrinkle. Roughly 80% to 85% of AI projects have failed since 2018. More than half of CEOs have yet to see clear benefits from AI deployments, and only about 12% report both higher revenues and reduced costs. Huang's vision assumes enterprises can scale from failed pilots to enterprise-wide deployments involving thousands of autonomous agents operating with minimal human oversight. Governance frameworks for such agents remain nascent, and organizations that struggled with basic chatbot deployments would now have to manage thousands of agents operating across interconnected systems.
The Self-Interest Problem
Huang's proposal also reveals something worth acknowledging: self-interest wrapped in vision. He argued that companies should think of compute allocation the way they once thought about office space or health insurance, framing the idea as transforming a company of 50,000 people into one operating as if it had 500,000. If that narrative takes hold across the Fortune 500, demand for Nvidia's chips extends further into the future.
This does not necessarily make Huang wrong. But critics might view it as an indirect way to lock talent into Nvidia's ecosystem, given the company's dominance in AI GPUs. Competing chip makers cannot offer equivalent token budgets if they lack Huang's manufacturing scale and market share. Engineers accepting token compensation benefit most if they work within Nvidia's ecosystem.
A Genuine Shift
Yet the underlying economic shift is real. Morgan Stanley estimated that AI-driven productivity gains could reduce the need for incremental software engineering hires by 20-30% within three years at major technology companies. Around 65% of executives expect 11% to 30% of their workforce to be reskilled as AI reshapes job functions by 2026. In that environment, token budgets are not wild speculation; they are a logical response to talent competition in a world where compute scarcity has become a real constraint on productivity.
Huang's proposal amounts to this: treat AI access as a productivity tool, price it transparently, and let engineers spend it on the tasks they believe will generate the most value. Whether it actually delivers the 10x multiplier he imagines depends not on Huang's rhetoric, but on whether enterprises can finally turn their years of failed AI experiments into reliably functioning autonomous systems.