Hewlett Packard Enterprise used Nvidia's GTC conference to roll out a broad expansion of its artificial intelligence portfolio, introducing new systems for enterprise, sovereign and high-performance computing deployments. The announcements reveal a fundamental shift in how organisations approach AI: enterprises are no longer satisfied with isolated experiments and instead want repeatable, governed systems that deliver measurable returns. The winners are not optimising around individual projects but standardising how AI operates across the enterprise, driven by growing anxiety about the economics of AI.
HPE has expanded its Nvidia-based AI portfolio with new systems built on Blackwell and upcoming Rubin GPUs, alongside updates to its Alletra Storage MP X10000, which it claims is the first object storage platform to achieve Nvidia-Certified Storage validation. The next-generation Nvidia Vera Rubin NVL72 by HPE is a flagship AI system engineered for frontier-scale models in excess of 1 trillion parameters. The system pairs 36 Nvidia Vera CPUs, 72 Nvidia Rubin GPUs, sixth-generation Nvidia NVLink scale-up networking, Nvidia ConnectX-9 SuperNICs, and Nvidia BlueField-4 DPUs with HPE's liquid cooling integration, services, and data centre design expertise, delivering high efficiency at scale.
The data pipeline problem nobody mentions
As AI infrastructure moves into production, data pipelines, and specifically inference context, have emerged as a critical performance bottleneck. HPE is working closely with Nvidia to accelerate every stage of the AI data lifecycle, from ingest and vectorisation to inference and recovery. Nvidia has validated and benchmarked the Alletra Storage MP X10000's performance for workloads of up to 128 GPUs, conducted functional tests for enterprise-grade availability and reliability, and confirmed that the storage layer efficiently feeds data to accelerated computing resources, delivering faster model training, lower-latency inference, and better overall utilisation.

Sovereign AI takes shape in Europe
The company is announcing new Nvidia-powered AI Factory and Supercomputing ranges, which include AI grids and enable so-called sovereign AI in Europe and the US. HPE is building a supercomputer for the European Union AI Factory, HammerHAI, with the High Performance Computing Center Stuttgart managing the effort. The integrated approach will help researchers, startups, and enterprises access AI resources while operating in alignment with European Union data security requirements. This addresses a genuine concern: organisations in regulated jurisdictions have long felt forced to choose between accepting US cloud dependency and building everything themselves.
What's available now, and what's still vapourware
Nvidia RTX PRO 6000 Blackwell Server Edition GPUs are available in the HPE AI Factory portfolio today. The Nvidia Vera Rubin NVL72 by HPE rack-scale system will be available in December 2026. The new network expansion racks for HPE Private Cloud AI, which scale up to 128 GPUs, will be available in July. The new HPE Compute XD700 is an Open Compute Project-inspired AI server based on the Nvidia HGX Rubin NVL8 liquid-cooled AI platform, arriving early 2027.
The real question is whether these systems can deliver on the promise of lower total cost of ownership compared to cloud alternatives. The Nvidia Rubin platform is said to deliver up to a tenfold reduction in inference token cost and a fourfold reduction in the number of GPUs required to train mixture-of-experts models compared to prior Blackwell platforms. If those claims hold up in real-world deployments, the economics suddenly favour building out on-premises infrastructure rather than renting GPU time from hyperscalers.
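To make the scale of those vendor claims concrete, the arithmetic can be sketched as follows. Every number here is hypothetical, chosen purely for illustration; only the tenfold and fourfold ratios come from the announcement.

```python
# Illustration of Nvidia's stated ratios for Rubin vs Blackwell:
# up to 10x lower inference token cost, 4x fewer training GPUs for
# mixture-of-experts models. All absolute figures are hypothetical.
blackwell_cost_per_m_tokens = 2.00                     # assumed $/1M tokens
rubin_cost_per_m_tokens = blackwell_cost_per_m_tokens / 10

blackwell_training_gpus = 512                          # assumed MoE training fleet
rubin_training_gpus = blackwell_training_gpus / 4      # 128 GPUs

monthly_tokens_m = 50_000                              # assumed 50B tokens served/month
monthly_savings = monthly_tokens_m * (
    blackwell_cost_per_m_tokens - rubin_cost_per_m_tokens
)
print(monthly_savings)  # 90000.0 (hypothetical dollars per month)
```

Under these made-up volumes, the claimed ratios translate into savings large enough to shift the build-versus-rent calculation; whether the ratios survive contact with production workloads is exactly the open question.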
Still, the cash flow maths matter more than the marketing. HPE Financial Services is making it easier to advance AI and modernisation projects with a new 90/9 Advantage financing programme, requiring no payments for the first 90 days, followed by monthly lease payments of 1 percent for the next 9 months. The offer is available across the networking, hybrid cloud, and compute server portfolios. Translation: HPE is removing financing friction because the traditional upfront capital cost is the actual blocker for most organisations.
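The first-year cash flow implied by the 90/9 structure is easy to lay out. Assuming the 1 percent monthly payment is calculated against the total contract value (an assumption, as the announcement does not specify the base), and treating 90 days as three months:

```python
# Sketch of the 90/9 Advantage first-year cash flow as described:
# no payments for the first 90 days (~3 months), then monthly lease
# payments of 1 percent for nine months. The contract value and the
# payment base (total contract value) are assumptions for illustration.
contract_value = 1_000_000  # hypothetical deal size in dollars

schedule = [0.0] * 3 + [0.01 * contract_value] * 9  # months 1-12
first_year_outlay = sum(schedule)
print(first_year_outlay)  # 90000.0, i.e. 9 percent of the contract in year one
```

On these assumptions, a customer pays out only 9 percent of the contract value in the first twelve months, which is the point: the deal defers the capital outlay that would otherwise stall the project. What the payment terms look like after month twelve is not stated in the announcement.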
The HPE and Nvidia partnership is fundamentally sound. The infrastructure works. The question is whether enterprises will actually commit to building sovereign AI factories, or whether the gravitational pull of US cloud dominance proves too strong to escape. The next 18 months will reveal which way the market is actually leaning.