Broadcom says it has a line of sight to more than $100 billion in revenue from artificial intelligence chips by 2027, touting progress from Anthropic, Meta and OpenAI. It is a striking forecast, delivered with the kind of certainty that makes investors sit up and listen. Yet beneath the bold projection lies a more interesting claim: the company's leadership is openly betting that the world's richest tech firms simply cannot solve the problem of building and manufacturing their own silicon at commercial scale.
CEO Hock Tan pointed to 106 percent year-over-year growth for AI-related silicon, which brought in $8.4 billion of revenue for the quarter. That growth is real, and it reflects genuine demand. But Broadcom's argument about why this growth will persist requires scrutiny. If Tan is right that hyperscalers cannot achieve what his company has mastered, the company's fortress appears unassailable. If he is wrong, Broadcom's dominance may be temporary.
The Case for Specialisation
There is real substance to Broadcom's position. Tan argued that homebrew chipmaking efforts must produce chips competitive with Nvidia and other AI players, noting that "anybody can design a chip in a lab that works well," but asking "can you produce 100,000 of those chips quickly, at yields that you can afford?" and stating "we do not see too many players in the world that can do that." This reflects genuine economic reality. Chip manufacturing at scale is brutal. It demands not just design talent, but familiarity with yield optimisation, supply chain choreography, advanced packaging technologies, and the political economy of securing fabrication plant capacity from companies like Taiwan Semiconductor Manufacturing Company.
The company has tangible evidence of its usefulness. Anthropic will deploy one gigawatt of Broadcom-built TPUs in 2026 and plans a three-gigawatt deployment in 2027, while Meta will install multiple gigawatts of Broadcom's XPU accelerators in 2027 and beyond. OpenAI is expected to deploy over one gigawatt of its first-generation custom chip in 2027. These are not hypothetical commitments. They represent billions of dollars in confirmed orders.
The Counter-Argument
Yet the narrative that outside firms cannot succeed in custom silicon is harder to sustain when examined closely. The tech giants pouring capital into in-house accelerator programs are not naive. They have money, talent, and strategic motivation. Google developed the TPU in-house starting in 2015. Meta is openly designing its own chips. Microsoft, Amazon, and Apple are doing the same. These companies are not dabbling; they are committing hundreds of billions of dollars to reduce their dependence on outside suppliers.
Google designed its tensor processing units alongside Broadcom and has made its chips available to cloud customers since 2018, with key customers now including Apple and Anthropic. This suggests that the relationship between hyperscaler and design partner is not static. Today's close collaboration can become tomorrow's redundancy. Broadcom itself is helping these firms transition from customers dependent on its expertise to competitors who understand how to operate without it.
The Pragmatic Middle Ground
Somewhere between Broadcom's confidence and the hyperscalers' ambition lies a more honest assessment. The company is right that manufacturing scale matters enormously. It is probably right that no single AI startup will replicate its capabilities in the near term. And even if Broadcom hits its targets, Nvidia will dwarf it in AI revenue, with the larger rival expected to generate around $333 billion in fiscal 2027 from AI data center customers. The smartphone analogy is instructive: Apple designs its A-series chips, but manufacturing them through TSMC is non-negotiable. This model can persist for years.
However, Broadcom's assertion that hyperscalers cannot match its abilities "for many years to come" deserves skepticism. Google has already done it. Meta is doing it. The barriers Tan describes are formidable but not insurmountable when a company can spend $10 billion on a single supplier relationship, hire the top 1 percent of silicon engineers, and absorb losses while optimising yields. Broadcom's moat is real, but it is shrinking.
The sensible position is this: Broadcom has solved a hard problem and will profit handsomely from it. The custom accelerator business is a legitimate, durable revenue stream. But the company's forecast of $100 billion in annual AI chip sales is built partly on the assumption that the competitive landscape will remain frozen. Capital and talent tend to thaw such frozen states eventually. Broadcom should plan for a world where it remains essential but not dominant.