
Archived Article — The Daily Perspective is no longer active. This article was published on 1 March 2026 and is preserved as part of the archive.

Business

Nvidia's $216 Billion Year Shows AI Has Consumed the Company

Gaming, once the company's crown jewel, now accounts for less than a tenth of revenue as data centre dominance reshapes the chip giant entirely

Image: Tom's Hardware
Key Points
  • Nvidia reported record full-year revenue of $215.9 billion for fiscal 2026, up 65% from the prior year.
  • Q4 FY2026 quarterly revenue hit $68.1 billion, a 73% year-on-year jump driven almost entirely by AI data centre hardware.
  • Gaming GPU revenue fell 13% quarter-on-quarter in Q4 to $3.7 billion, and now represents just 7.4% of Nvidia's annual revenue.
  • Data centre revenue reached $62.3 billion in Q4 alone, accounting for 91.5% of total quarterly revenue.
  • Nvidia's Q1 FY2027 guidance of $78 billion signals no slowdown, though supply constraints and China trade restrictions remain risks.

There is a number buried in Nvidia's latest earnings report that tells you almost everything you need to know about how thoroughly the AI boom has rewritten the global technology industry. For fiscal year 2026, Nvidia's gaming division, once the entire point of the company, generated $16 billion in revenue. That sounds impressive until you set it against the full-year total: $215.9 billion. Gaming now accounts for just 7.4 per cent of what Nvidia earns in a year.

The numbers speak for themselves: for the quarter ended January 31, 2026, Nvidia's revenue totalled $68.1 billion, up 20 per cent sequentially and 73 per cent year-on-year, while net income reached $42.96 billion, up 94 per cent year-on-year, with a gross margin topping 75 per cent. For the full year, revenue came in at $215.938 billion, up 65 per cent from the prior year's $130.5 billion, with a full-year gross margin of 71.3 per cent. These are not the results of a chip company. They are the results of a near-monopoly infrastructure provider for the AI age.
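The headline growth rates follow directly from the revenue figures quoted above; a minimal sanity check, using only the article's reported numbers:

```python
# Revenue figures quoted in the article (US$ billions).
fy2026_revenue = 215.938   # full-year FY2026
fy2025_revenue = 130.5     # prior-year total
q4_revenue = 68.1          # quarter ended January 31, 2026

# Full-year growth: roughly 65 per cent, matching the reported figure.
full_year_growth = fy2026_revenue / fy2025_revenue - 1

# Q4's share of the full year shows how back-loaded the growth is.
q4_share = q4_revenue / fy2026_revenue

print(f"Full-year growth: {full_year_growth:.1%}")     # 65.5%
print(f"Q4 share of annual revenue: {q4_share:.1%}")   # 31.5%
```

The back-loading matters: nearly a third of the year's revenue arrived in the final quarter, which is why sequential comparisons flatter the company less than annual ones.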

The engine behind all of it is the Blackwell platform. Data centre revenue now represents 91.5 per cent of Nvidia's total quarterly revenue, with the bulk of that growth driven by Blackwell, as hyperscalers, AI firms, and enterprises have been unable to get enough of the company's latest-generation server silicon. In Q4 alone, data centre revenue included $51.3 billion from compute hardware and $10.98 billion from networking, with networking up 263 per cent year-on-year. That networking surge is worth pausing on: it signals that customers are not merely buying chips but entire AI factory ecosystems, a dynamic that makes Nvidia's competitive moat considerably deeper than it appears on the surface.

For investors, the signal is clear. Nvidia's guidance for Q1 FY2027 is revenue of $78 billion, plus or minus 2 per cent. That would represent yet another sequential record, and management has flagged that the figure excludes any data centre compute revenue from China. Strip away the buzz and the fundamentals show a company growing faster than almost any large-cap business in history, with margins most industrial companies would regard as science fiction.
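The guidance band itself is simple arithmetic; a short sketch of what the stated range implies, using the figures above:

```python
# Q1 FY2027 guidance: $78 billion, plus or minus 2 per cent (figures from the article).
midpoint_bn = 78.0
tolerance = 0.02

low_bn = midpoint_bn * (1 - tolerance)    # 76.44
high_bn = midpoint_bn * (1 + tolerance)   # 79.56

# Even the bottom of the band implies double-digit sequential growth on Q4's $68.1bn.
q4_revenue_bn = 68.1
min_sequential_growth = low_bn / q4_revenue_bn - 1

print(f"Guidance range: ${low_bn:.2f}bn to ${high_bn:.2f}bn")
print(f"Minimum implied sequential growth: {min_sequential_growth:.1%}")  # 12.2%
```

In other words, even at the pessimistic end of its own guidance, Nvidia expects to grow quarter-on-quarter by more than many chipmakers grow in a year.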

Gaming's Slow Retreat

Nvidia's gaming unit, which used to be its biggest, recorded revenue of $3.7 billion for the quarter, up 47 per cent from a year ago but down 13 per cent from the prior quarter. Nvidia has attributed the sequential decline to the natural winding down of holiday season inventory restocking, and that explanation is credible given historical patterns. Gaming is one of only two divisions not to record an all-time revenue high for the quarter, having most recently peaked in Q2 FY2026, and the segment is highly cyclical due to holiday sales patterns.

There is a structural concern lurking beneath the seasonal excuse, however. Analysts have speculated that Nvidia may skip the launch of a new gaming GPU this year, as memory constraints force chipmakers to prioritise AI processors. The company itself has flagged that supply constraints are expected to be a headwind for the gaming business in Q1 FY2027 and beyond. For the millions of consumers who built their relationship with Nvidia through a GeForce card in a gaming PC, the message is uncomfortable but clear: you are no longer the priority.

Across the full fiscal year, Nvidia generated $16.042 billion from gaming GPUs, $3.191 billion from professional visualisation graphics, and $619 million from lower-cost OEM graphics products, bringing total graphics revenue to $19.852 billion, or 9.2 per cent of annual revenue. In real terms, this translates to a business that has not collapsed but has been decisively outgrown by the data centre division, which now dwarfs it by roughly an order of magnitude.
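Re-summing the segment figures quoted above gives the graphics total and its share of annual revenue; a minimal check:

```python
# Full-year graphics segment revenue (US$ billions), figures from the article.
graphics_segments = {
    "gaming": 16.042,
    "professional_visualisation": 3.191,
    "oem": 0.619,
}
annual_revenue = 215.938

graphics_total = sum(graphics_segments.values())             # 19.852
graphics_share = graphics_total / annual_revenue             # ~9.2%
gaming_share = graphics_segments["gaming"] / annual_revenue  # ~7.4%

print(f"Graphics total: ${graphics_total:.3f}bn")
print(f"Graphics share of revenue: {graphics_share:.1%}")
print(f"Gaming share of revenue: {gaming_share:.1%}")
```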

The Monopoly Question

It is here that the centre-right instinct for market scepticism deserves some oxygen. Nvidia's dominance is, by any measure, extraordinary. When a single company controls the overwhelming majority of the compute infrastructure on which a transformative technology depends, questions about pricing power, supply allocation, and long-term competitive dynamics are entirely legitimate. The Australian Competition and Consumer Commission and its counterparts globally have been watching AI infrastructure concentration with growing interest, and rightly so.

The counter-argument from technology optimists is equally serious. Nvidia did not arrive at this position through regulatory capture or anti-competitive conduct in any proven sense. It invested billions over decades in CUDA, its parallel computing platform, long before AI demand materialised, and that foresight created genuine, earned competitive advantage. Challengers including AMD, Intel, and a growing field of custom silicon designers at the hyperscalers are all competing hard. The market is not static.

Whether that competitive response arrives fast enough is the variable the market has yet to price. Nvidia's own counter is already in motion: its next-generation Vera Rubin rack-scale systems, the successor to Grace Blackwell, are expected later this year. The Vera Rubin platform comprises six new chips designed to deliver up to a tenfold reduction in inference token cost compared with Blackwell, with AWS, Google Cloud, Microsoft Azure, and Oracle Cloud Infrastructure among the first to deploy Vera Rubin-based instances. If Nvidia can execute another generational leap before competitors close the gap, its structural position could extend well into the decade.

What This Means for Australian Businesses

For Australian companies and institutions investing in AI infrastructure, Nvidia's results carry a practical implication. The cost of compute per token is falling, as the company's own Vera Rubin projections suggest, but access to that compute is rationed by Nvidia's supply chain and the purchasing power of the world's largest cloud providers. Australian businesses largely access Nvidia hardware through AWS, Microsoft Azure, and Google Cloud, all of which are themselves dependent on Nvidia's allocation decisions.

The Australian Parliament has begun examining AI governance frameworks, and the concentration of AI compute infrastructure in the hands of a small number of US-listed companies is a sovereignty question that deserves more attention than it currently receives in domestic policy discussions. Sound economic management demands that governments understand supply chain dependencies, not just regulatory ones.

Nvidia's fiscal 2026 results are a genuine achievement, the product of extraordinary engineering and sharp strategic bets made years before the current AI wave crested. The harder question for investors, regulators, and policymakers alike is whether a global technology stack of this consequence should rest so completely on a single company's product roadmap. That is not an argument against Nvidia's success. It is the kind of question that mature democracies ask about critical infrastructure, and there are reasonable answers on both sides of the ledger.

Darren Ong

Darren Ong is an AI editorial persona created by The Daily Perspective, writing about fintech, property tech, ASX-listed tech companies, and the digital disruption of traditional industries. Articles under this byline are generated using artificial intelligence with editorial quality controls.