
Archived Article — The Daily Perspective is no longer active. This article was published on 1 March 2026 and is preserved as part of the archive.

Technology

OpenAI's $110 Billion Mega-Deal Looks Impressive — Read the Fine Print

Amazon, Nvidia, and SoftBank's headline-grabbing investments are as much about locking in commercial returns as they are about backing the future of AI.

Image: The Register
Key Points
  • OpenAI announced $110 billion in new investment from Amazon, Nvidia, and SoftBank at a $730 billion pre-money valuation.
  • Much of Amazon's $50 billion and all of Nvidia's $30 billion are tied to OpenAI committing to use their compute infrastructure, not open-ended equity bets.
  • SoftBank's $30 billion is structured as three tranches across six months and represents genuine cash injection rather than a commercial loop.
  • OpenAI is not expected to reach profitability until at least 2029, despite annualised revenue reportedly exceeding $20 billion.
  • The deals raise legitimate questions about circular valuation inflation and what they mean for competition, regulation, and Australian businesses building on AI platforms.

From Tokyo, where semiconductor earnings calls have become a kind of spectator sport, the announcement that landed out of San Francisco on Friday carried the now-familiar grammar of the AI funding boom: enormous numbers, ambitious language, and a headline valuation that rewards a second read. OpenAI announced $110 billion in new investment from Amazon, Nvidia, and SoftBank at a pre-money valuation of $730 billion, making it the largest private financing in tech history, as reported by The Register.

The raw figures are staggering. Amazon is committing $50 billion, Nvidia $30 billion, and SoftBank a further $30 billion. But the structure of these deals tells a more layered story than the press releases suggest, and that structure matters enormously for anyone trying to assess what this moment actually means for the technology's future.

Not Quite What It Looks Like

Of Amazon's $50 billion, $35 billion will only be released when, as The Register reports, certain commercial conditions are met. Those conditions centre on OpenAI renting two gigawatts of Amazon's Trainium AI accelerators and deploying its models through AWS. Amazon will also become the exclusive third-party cloud distribution provider for OpenAI Frontier, the company's new enterprise agent-building platform launched in February. In short: Amazon is paying OpenAI partly in exchange for OpenAI agreeing to spend that money back with Amazon. The investment returns are, to a significant degree, guaranteed before a dollar changes hands.

Nvidia's $30 billion follows similar logic. The chip giant has struck an expanded infrastructure partnership requiring OpenAI to deploy three gigawatts of inference capacity and two gigawatts of training capacity on Nvidia's forthcoming Vera Rubin systems, which are expected to begin shipping in the second half of 2026. The Register notes that at roughly $8.4 million per rack, and with compute costs accounting for around half the total expense of standing up a modern AI data centre, the full bill for five gigawatts of Vera Rubin capacity could exceed $300 billion. OpenAI will not be bearing that cost alone; it will broker purchase commitments through hyperscaler and cloud partners. Nvidia, in other words, is investing in a customer.
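As a rough sanity check, The Register's figures can be run backwards. The calculation below uses only the numbers cited above (roughly $8.4 million per rack, compute at about half of total cost, a $300 billion floor for five gigawatts); the implied per-rack power draw is derived from those figures, not reported anywhere.

```python
# Back-of-envelope check on the cited data-centre figures.
# All inputs come from the article; the outputs are implied, not reported.
RACK_COST = 8.4e6        # dollars per Vera Rubin rack (cited)
COMPUTE_SHARE = 0.5      # compute ~ half of total build-out cost (cited)
TOTAL_BILL = 300e9       # dollar floor for the full five-gigawatt build (cited)
CAPACITY_GW = 5          # gigawatts of combined inference and training capacity

compute_cost = TOTAL_BILL * COMPUTE_SHARE        # ~$150 billion on racks alone
racks = compute_cost / RACK_COST                 # ~17,900 racks
kw_per_rack = CAPACITY_GW * 1e6 / racks          # implied power density, ~280 kW

print(f"{racks:,.0f} racks, ~{kw_per_rack:.0f} kW per rack implied")
```

The implied density of roughly 280 kW per rack is well above the ~120 kW of current-generation NVL72 systems, which is consistent with the next-generation hardware the deal anticipates, though the article itself does not state a rack power figure.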

This kind of financial engineering, where investment and procurement are bundled into the same deal, has become standard practice across the AI industry. In October last year, Nvidia rival AMD issued OpenAI a warrant for roughly 10 per cent of its stock contingent on OpenAI deploying six gigawatts of AMD Instinct accelerators. This week, according to The Register, AMD extended the same arrangement to Meta. The deals are elegant from a commercial standpoint, but they raise genuine questions about how independently these valuations are being set.

SoftBank Is the Actual Believer

The exception to this pattern is SoftBank. Masayoshi Son's firm is investing $30 billion in three tranches of $10 billion, beginning in April and concluding in October, with no obvious compute-for-cash swap attached. This is a genuine capital injection from a firm that has staked its reputation on identifying transformational technology early, sometimes correctly and sometimes not.

Even with that injection, OpenAI's financial position remains precarious by any conventional standard. The company's annualised recurring revenue has reportedly crossed $20 billion, and it counts more than 50 million paying consumer subscribers. Yet it is not expected to reach profitability before 2029 at the earliest. The gap between revenue and the cost of running frontier AI research is, by any measure, vast.

OpenAI's latest funding round is the largest private tech financing in history, though the structure of the deals invites careful scrutiny.

The Enterprise Play and What It Means

Central to this round is OpenAI's ambition in the enterprise market. Frontier, launched on 5 February 2026, is designed to let large organisations build, deploy, and manage AI agents that can operate across existing business systems: CRMs, data warehouses, ticketing platforms and the like. Early customers include Uber, Intuit, State Farm, and HP. AWS will serve as the exclusive third-party cloud distribution channel for the platform, which is part of why Amazon's investment comes with those conditions attached.

For Australian businesses evaluating their AI strategy, the implications are real. If OpenAI's Frontier platform becomes the de facto enterprise layer for AI agents, vendor lock-in becomes a serious governance consideration. The Australian Competition and Consumer Commission has been watching the concentration dynamics of Big Tech platforms closely, and the bundling of cloud distribution with AI model access is precisely the kind of arrangement that draws regulatory attention. The question of whether the exclusive AWS distribution arrangement for Frontier is consistent with competitive cloud markets has not yet attracted formal scrutiny, but it is not hard to imagine it doing so.

A Legitimate Case for the Bulls

It would be unfair to dismiss the sceptical read of these deals without acknowledging what the bulls get right. OpenAI's consumer reach is genuinely extraordinary: ChatGPT serves more than 900 million weekly active users globally. The company's Codex software development tool has tripled its weekly active users to 1.6 million since the start of the year. These are not paper metrics; they reflect a product that has become embedded in how a significant portion of the world's knowledge workers operate day to day.

The argument that infrastructure investment of this scale is justified rests on a straightforward premise: AI's computational demands will keep growing, the costs of training and running frontier models are real, and whoever locks in the best infrastructure relationships today will have structural advantages tomorrow. From that vantage point, the circular nature of these deals is not deceptive; it is just how capital-intensive industries organise themselves. Airlines and aircraft manufacturers have been doing something similar for decades.

Complexity Worth Sitting With

What deserves honest acknowledgement is that both readings contain truth. The deals are commercially rational for every party, which is precisely what makes them worth examining carefully rather than celebrating or dismissing reflexively. For regulators, investors, and businesses in Australia and across the Indo-Pacific who are building strategies around these platforms, the critical question is not whether $110 billion is a lot of money. It clearly is. The question is what kinds of dependencies are being created, for whom, and on what terms.

OpenAI itself has been careful to note that its existing relationship with Microsoft remains unchanged, and that Azure continues as the exclusive cloud provider for its stateless APIs and first-party products. That detail alone reveals how genuinely complex the web of commitments has become. Reasonable people can look at the same set of facts here and reach different conclusions about whether this is inspiring or alarming. The evidence suggests it is probably both, and that sitting with that complexity, rather than flattening it into a narrative of triumph or hubris, is the more useful response.

Yuki Tamura

Yuki Tamura is an AI editorial persona created by The Daily Perspective, covering the cultural, political, and technological currents shaping the Asia-Pacific region, from Japanese innovation to Pacific Island climate concerns. As an AI persona, articles are generated using artificial intelligence with editorial quality controls.