
Archived Article — The Daily Perspective is no longer active. This article was published on 1 March 2026 and is preserved as part of the archive.

Business

Tech Giants Set to Spend $710B on AI Infrastructure in 2026, Dwarfing Ireland's GDP

A TrendForce analysis of eight hyperscalers reveals the staggering scale of the AI infrastructure arms race, with direct consequences for memory supply chains and Australian technology buyers.

Image: The Register
Key Points
  • Taiwan-based TrendForce estimates eight leading cloud providers will spend over $710 billion on AI servers and data centres in 2026, up 61 percent year-on-year.
  • The four largest Western hyperscalers alone account for roughly $635 billion of that total, confirming extreme market concentration.
  • Surging demand for high-bandwidth memory chips is causing shortages and rising prices that flow through to Australian technology buyers.
  • A new memory standard called High-Bandwidth Flash, developed by SK Hynix and Sandisk, aims to ease AI inference bottlenecks from around 2030.
  • Australian IT spending is forecast to reach A$172 billion in 2026, driven by AI and cloud adoption, but questions about return on investment persist.

From Singapore: The numbers attached to this year's AI infrastructure build-out have become genuinely difficult to contextualise. Taiwan-based market research firm TrendForce now estimates that the world's eight largest cloud operators (Google, Amazon, Meta, Microsoft, Oracle, Tencent, Alibaba, and Baidu) will collectively commit more than US$710 billion in capital expenditure during 2026. That figure, representing approximately 61 percent growth on last year's already-record spending, exceeds the entire gross domestic product of Ireland. For Australian exporters, the signal is unmistakable: the AI infrastructure supercycle is accelerating, not plateauing.

The spending is far from evenly distributed. According to figures cited by The Register, the four largest providers (Google, Amazon, Meta, and Microsoft) account for roughly US$635 billion of that outlay, nearly 90 percent of the total. Earlier disclosures reported by The Register confirmed that Amazon is projecting approximately US$200 billion in capex for the year, while Alphabet is targeting between US$175 billion and US$185 billion. Meta, despite sitting outside the pure cloud business, plans to spend up to US$124.5 billion, a 77 percent year-on-year increase. The scale of these commitments has prompted analysts to note that capital intensity at some hyperscalers now runs at 45 to 57 percent of revenue, ratios more typical of industrial utilities than technology companies.

Image: rows of servers inside a modern data centre.
Modern data centres are absorbing unprecedented volumes of capital as cloud operators race to build AI capacity.

The supply chain impact will be felt in semiconductor and memory markets worldwide. All of this spending on GPU-dense servers is straining the production of high-bandwidth memory (HBM), the specialised chip architecture used in Nvidia and AMD accelerators. As chipmakers redirect manufacturing lines toward higher-margin HBM products, conventional server memory and storage are becoming scarcer and more expensive. The Register has reported that the resulting memory shortage is now rippling through vendor pricing terms and delivery windows, a development that Australian IT procurement teams would be wise to monitor closely.

In response to those constraints, South Korean chipmaker SK Hynix and storage manufacturer Sandisk have announced a joint standardisation effort around a new memory category called High-Bandwidth Flash (HBF). The concept positions HBF as a layer between ultra-fast HBM and conventional solid-state drives, matching HBM's bandwidth while delivering eight to sixteen times the storage capacity at comparable cost. SK Hynix forecasts meaningful commercial demand for complex memory solutions of this type from around 2030, suggesting relief from the current shortage remains some years away.

The hardware choices being made by individual hyperscalers also carry longer-term significance. Google remains the only major cloud provider adding more custom silicon than GPU servers to its fleet; TrendForce estimates that Google's Tensor Processing Units will feature in roughly 78 percent of AI servers shipped to its data centres this year. Amazon's build-out leans 60 percent toward conventional GPU servers for now, though its next-generation Trainium3 chips are expected to ramp in the second half of 2026. Meta and Microsoft continue to rely heavily on Nvidia rack-scale systems, while Tencent, the sole Chinese operator still able to procure Nvidia GPUs under existing export controls, follows the same Nvidia-centric approach.

There is a legitimate counterargument to the celebratory framing that typically surrounds these figures. Critics point out that the relationship between infrastructure spending and actual revenue generation remains unproven at this scale. Analysis of Amazon's capex trajectory shows the company is already spending beyond its free cash flow, requiring debt financing to sustain the programme. Morningstar analysts have cautioned that GPU-heavy data centres carry useful-life assumptions of five to six years; if those assets depreciate faster than expected, the economics of the build-out deteriorate quickly. And the energy constraints are real: Microsoft has reportedly disclosed an US$80 billion backlog of Azure orders that cannot be fulfilled simply because there is insufficient grid capacity to power the planned facilities.

Image: an Nvidia H100 GPU accelerator chip.
Nvidia GPU accelerators remain at the centre of the AI server build-out for most major hyperscalers.

The environmental dimension deserves honest attention. Data centre emissions are rising as operators turn to gas-fired generation to bridge the gap between grid capacity and AI power demand. Hyperscalers are increasingly investing in small modular reactors, microgrids, and advanced cooling systems as a result, effectively becoming energy infrastructure companies. For an Australian audience accustomed to debates about industrial energy policy, the irony of the world's most aggressively capitalist technology firms underwriting their own power generation is worth noting.

Closer to home, the implications for Australia are concrete. Gartner forecasts Australian IT spending will reach A$172 billion in 2026, growing 8.9 percent, with data centre systems expected to expand by 22.5 percent and server spending by 30 percent. Microsoft has already committed A$5 billion to expand its cloud and AI infrastructure in Australia, and the federal government has signed a fresh five-year volume agreement for Microsoft cloud services across the public sector. Across the region, the trend is unmistakable: demand for AI compute is outrunning both the supply of chips and the availability of power.

The honest assessment sitting somewhere between uncritical enthusiasm and reflexive scepticism is this: the infrastructure being built in 2026 will almost certainly prove useful, but whether the returns will justify the scale of investment, and the timeline over which they arrive, remains genuinely uncertain. For Australian businesses consuming cloud services, the near-term effect is upward pressure on memory and chip costs. For Australian investors with exposure to global technology equities, the risk profile is more complex than headline spending figures suggest. Sound economic reasoning demands that even the most compelling technological bets be evaluated on their costs, their timelines, and the honest probability that the projected revenues actually materialise.

Mitchell Tan

Mitchell Tan is an AI editorial persona created by The Daily Perspective. Covering the economic powerhouses of the Indo-Pacific with a focus on what Asian business developments mean for Australian companies and exporters. As an AI persona, articles are generated using artificial intelligence with editorial quality controls.