
Archived Article — The Daily Perspective is no longer active. This article was published on 16 March 2026 and is preserved as part of the archive.

Business

Nvidia's $1 trillion wager on artificial intelligence reflects new computing era

The chipmaker's forecast signals a fundamental shift in how companies will deploy AI systems, driven by autonomous agents that think and act independently

Image: Tom's Hardware
Key Points
  • Nvidia CEO expects company to earn at least $1 trillion from AI chip sales through 2027, up from previous $500 billion projection
  • Shift from AI model training to real-world inference and autonomous agents driving new hardware demands
  • Company expanding beyond GPU market into CPUs and integrating Groq technology to compete in inference computing
  • Analysts question whether Nvidia can meet demand given manufacturing constraints, particularly from TSMC

Jensen Huang, chief executive and co-founder of Nvidia, said in his keynote at the GTC 2026 event that he expects the company to earn at least $1 trillion from AI hardware sales through 2027. It is a bold figure that, if realised, would underscore the astonishing capital demands of building global artificial intelligence infrastructure.

The announcement represents a sharp escalation from previous guidance: at its last earnings call, Nvidia had reiterated a revenue opportunity of around $500 billion for 2026. To put the new figure in perspective, no company in the world currently generates $1 trillion in annual revenue, though Nvidia's projection covers AI hardware revenue across the 2025–2027 period — compressed into three years, not annualised.

What drives this remarkable projection is a fundamental inflection point in how artificial intelligence systems actually work. The industry has largely moved past the computationally expensive phase of training large language models and is now transitioning to inference. "AI is finally able to do productive work, and the inflection point of inference has arrived," Huang stated.

This matters because inference — where trained AI systems answer questions and carry out tasks in real time — requires different hardware architectures than training. More significantly, the shift from call-and-answer chatbots to task-oriented agentic applications is driving a fundamental change in compute needs. Unlike chatbots that simply respond to prompts, agentic AI systems plan complex actions and execute tasks autonomously, such as building websites, creating marketing pitches and sending emails.

Nvidia is repositioning itself to dominate this new computing paradigm. Vera, a new CPU designed specifically for agentic artificial intelligence workloads, is twice as efficient and 50% faster than traditional rack-scale CPUs, according to the company. The move into CPUs represents a departure from Nvidia's traditional GPU dominance — a step it considers necessary to orchestrate the complex multi-agent systems expected to power the next generation of AI applications.

The company is also integrating technology from outside its traditional wheelhouse. During a 2.5-hour keynote at Nvidia's annual GTC developer conference in San Jose, California, Huang announced a push deeper into central processing units and unveiled an AI system built on technology acquired from startup Groq, part of his effort to strengthen the company's position in so-called inference computing. Inference exposes Nvidia to greater competition, including from custom processors built by customers such as Meta, even as it dominates the market for chips used to train AI.

Huang made clear that the $1 trillion figure represents his conservative estimate. "I am certain computing demand will be much higher than that," he said. Yet there are legitimate reasons to question whether Nvidia can meet such demand in the coming years, as its key supplier TSMC is expanding capacity at a rather conservative pace.

The supply chain vulnerability is real. Nvidia has already experienced capacity constraints, and manufacturing advanced chips requires years of planning and enormous capital investment in fabrication plants. Nvidia earned $215 billion in its fiscal year 2026, which ended on 31 January 2026, up from $130.5 billion in the prior year — a steep trajectory, yet scaling to $1 trillion in cumulative AI hardware revenue through 2027 would require a fundamentally different one.

Some industry analysts are more sceptical, arguing that Nvidia could reach $1 trillion in annual revenue only around 2030, and only if global AI infrastructure spending continues to grow into the multi-trillion-dollar range. That suggests the market opportunity may be real, but that the timing of Huang's projection may outpace the company's actual ability to deliver chips.

What the $1 trillion projection ultimately reflects is not a guarantee but a bet that the world's largest technology companies will continue spending at an unprecedented pace to build AI infrastructure. For a company whose market value exceeds $4 trillion, the stakes of that bet are enormous. Whether Nvidia can fulfil the promise depends not only on its own manufacturing capacity, but on whether the AI agents its hardware is designed to power actually deliver the transformative productivity gains that would justify the investment. That remains an open question.

Yuki Tamura

Yuki Tamura is an AI editorial persona created by The Daily Perspective, covering the cultural, political, and technological currents shaping the Asia-Pacific region, from Japanese innovation to Pacific Island climate concerns. As an AI persona, these articles are generated using artificial intelligence with editorial quality controls.