Nvidia announced the launch of computing platforms for orbital data centres on Monday during its GTC 2026 conference, marking the company's push into space infrastructure alongside its terrestrial chip dominance. The Vera Rubin Space Module is designed for orbital data centres running large language models and advanced foundation models directly in space, with a tightly integrated CPU-GPU architecture and high-bandwidth interconnect.
Compared with the Nvidia H100 GPU, the Rubin GPU on the module delivers up to 25x more AI compute for space-based inferencing. The announcement also reflects a broader shift within Nvidia away from pure GPU dominance. The company unveiled its new Vera CPU Rack architecture, which packs 256 liquid-cooled CPUs into a single rack for CPU-centric workloads and claims a 6x gain in CPU throughput. The rack marks Nvidia's entry into direct CPU sales, positioning the company against Intel and AMD in the traditional CPU market.
Early interest appears genuine. Aetherflux, Axiom Space, Kepler Communications, Planet Labs PBC, Sophia Space and Starcloud are using Nvidia accelerated computing platforms to power next-generation space missions. When Vera makes its debut later this year, Alibaba, ByteDance, Meta, Oracle, CoreWeave, Lambda, Nebius and NScale have all committed to deploying the chips in their data centres.
The appeal of orbital infrastructure is straightforward. Orbital data centres can use solar energy for power and don't require the enormous cooling solutions necessary to operate on Earth. As terrestrial data centre expansion faces permitting obstacles and grid constraints, space becomes theoretically attractive. Data centres will account for nearly half of U.S. electricity demand growth between now and 2030, and local officials have begun to balk at approving new server farms that swallow land, strain power grids and gulp cooling water.
Yet independent experts remain deeply sceptical about feasibility at scale. No current approach to shielding chips from radiation, maintaining acceptable computing uptimes, or resupplying a facility with new components is remotely realistic for a large-scale, commercial computing enterprise. Hardware churn is central to the scepticism around orbital AI: GPUs and specialised accelerators depreciate quickly as new architectures deliver step-change improvements every few years. On Earth, racks can be swapped, boards replaced and systems upgraded continuously; in orbit, every repair requires a launch, a docking or robotic servicing.
The structural engineering alone presents formidable obstacles. The ISS measures 109 metres tip to tip and took a decade of shuttle missions, robotic arms and spacewalks to assemble. An orbital data centre would need to scale that up by a few orders of magnitude: a 4x4 kilometre solar array covers roughly 3,000 times the area of the solar panels on the ISS.
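The area comparison can be sanity-checked with simple arithmetic. The ISS array figure below is an assumption, not a number from this article; published values vary, particularly after the iROSA upgrades, and it is chosen here to be consistent with the roughly 3,000x ratio cited above.

```python
# Back-of-envelope check of the solar-array comparison.
# The ISS array area is an ASSUMED figure (~5,300 m^2);
# published values vary with the iROSA panel upgrades.
iss_array_m2 = 5_300
orbital_array_m2 = 4_000 * 4_000  # a 4 km x 4 km array in m^2

ratio = orbital_array_m2 / iss_array_m2
print(f"Orbital array is ~{ratio:,.0f}x the ISS array area")
```

Under that assumption the ratio lands near 3,000, matching the figure quoted above; a different assumed ISS area shifts the multiple but not the orders-of-magnitude conclusion.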
Researchers at Saarland University in Germany calculated that a solar-powered orbital data centre could still produce an order of magnitude more emissions than a data centre on Earth, once rocket launches and the reentry of spacecraft components through the atmosphere are taken into account. Most of those extra emissions come from rocket stages and hardware burning up on reentry, which forms pollutants that can further deplete Earth's protective ozone layer.
Space debris and collision risk present another constraint. Critics warn that large constellations could worsen the space-junk problem, potentially triggering collisions that create debris cascades, and current projections suggest that adding thousands of data centre satellites could increase collision risk by 15-20%.
The economic threshold for viability remains uncertain. For orbital data centres to become economically viable, launch costs would need to fall below $200 per kilogram, a sevenfold reduction from current levels, and that threshold is not expected until the mid-2030s. If satellites require early replacement, or if radiation shortens their lifespan, the numbers could look quite different.
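The figures above imply a current launch price, which the arithmetic below makes explicit; this is a derivation from the article's own numbers, not an independent market quote.

```python
# Implied current launch cost from the article's figures:
# a $200/kg viability threshold described as a sevenfold
# reduction from today's prices.
threshold_per_kg = 200   # USD/kg viability threshold
reduction_factor = 7     # "sevenfold reduction from current levels"

implied_current = threshold_per_kg * reduction_factor
print(f"Implied current launch cost: ~${implied_current:,}/kg")
# Implied current launch cost: ~$1,400/kg
```

The implied ~$1,400/kg baseline is what the sevenfold claim presupposes; if actual launch prices differ, the required reduction factor changes accordingly.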
Nvidia's announcement amounts to a technological bet rather than a near-term infrastructure solution. The Vera Rubin Space Module has no release date; Nvidia says only that it will be available in future. The company is hedging its bets: its terrestrial Vera CPU rollout proceeds in the second half of 2026, whilst space remains a frontier prospect.
The real story is Nvidia's successful pivot from pure-play GPU maker to full-stack AI infrastructure supplier. Whether that stack ever extends to orbit depends on whether the physics and economics align more favourably than current evidence suggests.