From Washington: Nvidia moved decisively into direct CPU competition with Intel and AMD on Monday, unveiling a fully liquid-cooled rack system that packs 256 of its custom Vera processors, marking the chipmaker's broadest challenge yet to the incumbent CPU suppliers.
The announcement builds on a shift in Nvidia's CPU strategy that began in February, when the company struck a deal with Meta covering the first large-scale standalone deployment of Grace CPUs. Now, with Vera coming to market in the second half of this year, Nvidia is moving from niche GPU companion chips to full-service CPU vendor, claiming superiority in the workloads that matter most to modern AI datacenters.
The business logic is straightforward. According to Nvidia's head of AI infrastructure, CPUs are becoming the bottleneck in AI and agentic workflows. When AI agents run tasks like database queries, code compilation, and tool calling, they cannot execute those operations on GPUs alone. They need a fast CPU sitting alongside the accelerators. Intel and AMD's server CPUs are designed for general-purpose computing; Nvidia's new Vera processor is built specifically to avoid becoming the weak link in an AI pipeline.
Nvidia claims its new Vera CPU Rack delivers a 6x gain in CPU throughput and double the performance in agentic AI workloads. The hardware itself is substantial: Vera features 88 custom Olympus cores with spatial multithreading capability and uses LPDDR5X memory to deliver up to 1.2 TB/s of memory bandwidth. Nvidia says Vera is 50% faster and twice as efficient as traditional rack-scale CPUs.
To understand the scale difference: AMD's EPYC and Intel's Xeon server CPUs typically offer around 128 cores, compared with 72 in Nvidia's Grace CPU. Nvidia deliberately chose fewer cores because agentic AI does not require massive parallelism; it requires speed and low latency. The company designed its CPU specifically to keep its GPUs fed, prioritising single-threaded performance over total core count so that expensive GPU resources are not left idle.
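The reasoning above can be sketched as a toy model: if each agent step alternates a serial CPU phase (tool calls, database queries, compilation) with a GPU inference phase, then CPU latency, not core count, sets how long the GPU sits idle. The function and the timing numbers below are illustrative assumptions for the sake of the arithmetic, not Nvidia's figures.

```python
# Toy model of an agentic AI loop: each step runs a serial CPU phase
# followed by a GPU inference phase. GPU utilisation is the share of
# each step the GPU spends doing useful work.

def gpu_utilisation(cpu_ms: float, gpu_ms: float) -> float:
    """Fraction of one agent step spent on GPU work."""
    return gpu_ms / (cpu_ms + gpu_ms)

# Hypothetical numbers: 40 ms of GPU inference per step.
slow_cpu = gpu_utilisation(cpu_ms=60, gpu_ms=40)  # 0.40 -> GPU idle 60% of the time
fast_cpu = gpu_utilisation(cpu_ms=20, gpu_ms=40)  # ~0.67 -> idle time drops sharply
```

On these assumed numbers, tripling single-thread CPU speed lifts GPU utilisation from 40% to roughly 67% without touching the GPU at all, which is the sense in which a faster CPU "protects" expensive accelerator capacity.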
The competitive threat is real, particularly for Intel. Intel told CNBC it expects inventory to hit its lowest level in the current quarter, but said it is addressing the situation and expects supply to improve from the second quarter and through 2026. Supply constraints are industry-wide, but they create an opening for a well-positioned new entrant. According to chip analyst Ben Bajarin, the Meta deal alone is worth tens of billions of dollars.
Consider the counterargument. Intel and AMD still dominate the broader datacenter CPU market, and their chips excel at generalised workloads: virtualisation, databases, enterprise software. Nvidia's advantage exists specifically in the AI agent context. For agentic AI, raw core count matters less when single-thread performance and low-latency data handling become the bottlenecks. A hyperscaler running traditional databases or cloud services would still choose Intel or AMD. A company building massive AI infrastructure, by contrast, now has a third option.
The market seems to believe in the opportunity. Nvidia named Amazon Web Services, Google Cloud, Microsoft Azure, Oracle Cloud Infrastructure, Alibaba, ByteDance, CoreWeave, Lambda, Nebius, OpenAI, Anthropic, Meta and Mistral AI among the companies working with Vera or Vera Rubin systems. Vera-based products and the new Vera CPU will be available from Nvidia's partners starting in the second half of this year.
For Australian cloud providers and enterprises, the implications are clear. Any significant AI infrastructure investment will now mean evaluating three CPU suppliers instead of two, and the decision will hinge on whether the workloads are agentic-AI-centric or general-purpose. Bank of America predicts the CPU market could more than double, from USD 27 billion in 2025 to USD 60 billion by 2030, suggesting the total market is expanding fast enough for all three to grow even as Nvidia gains share in the AI-specific segment.
The real question facing Intel and AMD is whether they can adapt. While GPUs remain essential for training and inference, emerging AI workloads require modern datacenters to operate as collaborative CPU-GPU systems. As AI evolves toward reinforcement learning, autonomous agents and retrieval-heavy architectures, the CPU is reasserting itself as a critical control-plane and efficiency engine. Both established CPU makers have room to innovate, but Nvidia's head start in understanding GPU-CPU interaction and its ability to co-design entire stacks creates a genuine structural advantage that pricing and marketing alone cannot overcome.