When artificial intelligence workloads shifted from algorithms running on traditional CPUs to GPU clusters running neural networks, they did not just demand more power. They demanded a fundamental reimagining of how electricity flows through a datacenter. The comfortable assumptions baked into decades of infrastructure design have become liabilities.
For years, datacenters operated on a simple formula. Racks drew 10 to 15 kilowatts. A 48-volt direct current (48V DC) power distribution system, standardised across the industry, handled this reliably. Thick copper busbars funnelled electricity from centralised power shelves to compute trays. The system was mature, well understood, and thoroughly optimised.
Then accelerated computing arrived. A single modern GPU rack now consumes 100 kilowatts or more. Nvidia has exhibited an 800V power sidecar to feed 576 Rubin Ultra GPUs in a single Kyber rack, a configuration that demands megawatt-scale power delivery. At these densities, the physics of low-voltage distribution faces an unavoidable crisis.
The problem is current. At 48V, delivering 100 kilowatts requires currents exceeding 2,000 amperes, and the busbar must carry that current without overheating. Delivering 1 megawatt at 54V, the working voltage of a nominal 48V bus, requires up to 200 kilograms of copper busbar per rack. Across a single 1 gigawatt datacenter, rack busbars alone could require up to 200,000 kilograms of copper. Beyond the raw material cost, that volume of copper occupies space urgently needed for airflow and liquid cooling, and connector resistance generates localised heat. The infrastructure, designed for steady workloads, breaks under the transient surges of GPU computation.
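A first-order sketch makes the scaling concrete. The calculation below sizes an idealised busbar purely by an assumed allowable current density and run length; real busbars carry extra copper for safety standards, connectors, return paths, and transient headroom, which is why vendor figures such as the 200-kilogram-per-rack estimate are larger.

```python
# First-order comparison of rack power distribution at 48 V versus 800 V.
# The current density and busbar length are illustrative assumptions,
# not values from any vendor specification.
COPPER_DENSITY_KG_PER_M3 = 8960        # density of copper
DESIGN_CURRENT_DENSITY_A_PER_M2 = 2e6  # assumed allowable current density (2 A/mm^2)
BUSBAR_LENGTH_M = 2.0                  # assumed length of one per-rack busbar run

def busbar_estimate(power_w: float, voltage_v: float) -> tuple[float, float]:
    """Return (current in amperes, copper mass in kg) for one idealised run."""
    current = power_w / voltage_v                              # I = P / V
    cross_section_m2 = current / DESIGN_CURRENT_DENSITY_A_PER_M2
    mass_kg = cross_section_m2 * BUSBAR_LENGTH_M * COPPER_DENSITY_KG_PER_M3
    return current, mass_kg

for volts in (48, 800):
    amps, kg = busbar_estimate(100_000, volts)  # a 100 kW rack
    print(f"{volts:>3} V: {amps:7.0f} A, ~{kg:5.1f} kg of copper per run")

# 48 V needs roughly 2,083 A; 800 V needs 125 A. Because the conductor is
# sized to the current, the copper per run shrinks by the same factor of ~16.7.
```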
Operating at higher voltage can reduce the copper required by up to 45 percent, lower energy losses by eliminating multiple AC-to-DC conversion stages, and deliver up to a five-fold increase in power capacity compared with conventional 48V distribution. This is why the industry is converging on 800-volt high-voltage direct current (800V HVDC) architecture.
The economics are stark. At 800V, the same 100 kilowatts requires only 125 amperes instead of more than 2,000. Power loss scales with the square of current, so halving the current cuts resistive losses by 75 percent. To power a 1 MW rack, today's 48V distribution would demand almost 450 pounds (roughly 200 kilograms) of copper busbar, a weight and bulk that cannot realistically scale with long-term computing demand.
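The loss arithmetic is easy to verify. In the sketch below the busbar resistance is an arbitrary placeholder; only the ratio between the two scenarios matters.

```python
# Resistive loss in a conductor is P_loss = I^2 * R, and I = P / V, so for a
# fixed conductor the loss falls with the square of the distribution voltage.
R_OHMS = 0.001      # placeholder busbar resistance (illustrative)
POWER_W = 100_000   # a 100 kW rack

loss = {v: (POWER_W / v) ** 2 * R_OHMS for v in (48, 800)}
print(f"loss at 48 V:  {loss[48]:7.0f} W")
print(f"loss at 800 V: {loss[800]:7.0f} W")
print(f"reduction:     {1 - loss[800] / loss[48]:.1%}")

# Halving the current cuts loss by 75 percent; cutting it by a factor of
# ~16.7 (48 V -> 800 V) cuts loss in the same conductor by (48/800)^2, ~99.6%.
```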
The architecture is cleaner than the legacy approach. Instead of converting AC power to 48V DC at the rack level and then stepping it down again inside each server, the new design centralises AC-to-DC conversion and distributes 800V DC directly to the racks, where compact DC-to-DC converters on or near the server boards step the voltage down close to the GPUs, minimising energy losses along the way. The new architecture is expected to improve end-to-end efficiency by up to 5 percent, reduce maintenance costs by up to 70 percent, and lower cooling expenses, enabling sustainable growth and cutting total cost of ownership by up to 30 percent.
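One way to see where an end-to-end gain of a few percentage points could come from is to multiply per-stage conversion efficiencies along each power path. The stage breakdown and efficiency values below are illustrative assumptions, not measured figures for any particular product.

```python
# Compare end-to-end power-path efficiency by multiplying per-stage conversion
# efficiencies. All stage values are illustrative assumptions.
from math import prod

# Legacy path: facility UPS and AC distribution, rack-level AC-to-48V DC
# rectification, then two DC-DC steps down to the GPU core voltage.
legacy_stages = {
    "UPS / facility AC distribution": 0.97,
    "rack AC-to-48V DC rectification": 0.96,
    "48V-to-12V intermediate bus converter": 0.97,
    "12V point-of-load VRM": 0.93,
}

# 800V HVDC path: one centralised AC-to-DC conversion, 800V DC to the rack,
# then a single step-down stage before the point-of-load VRM.
hvdc_stages = {
    "centralised AC-to-800V DC": 0.985,
    "800V-to-bus DC-DC converter": 0.975,
    "point-of-load VRM": 0.93,
}

legacy_eff = prod(legacy_stages.values())
hvdc_eff = prod(hvdc_stages.values())
print(f"legacy chain: {legacy_eff:.1%}")
print(f"800V chain:   {hvdc_eff:.1%}")
print(f"gain:         {(hvdc_eff - legacy_eff) * 100:.1f} percentage points")
```

Under these assumed stage efficiencies the legacy chain lands around 84 percent end to end and the 800V chain around 89 percent, a gap of roughly five percentage points; actual gains depend entirely on the real converters deployed.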
The transition is already underway. Nvidia, working with key industry partners, is leading the move to 800V DC datacenter power infrastructure to support 1 MW IT racks and beyond, starting in 2027. ABB and Eaton are collaborating with Nvidia on 800V DC architectures for megawatt-class racks and gigawatt-scale campuses, spanning switchgear, UPS, and automation systems to enable denser, liquid-cooled facilities with intelligent energy management and modular, prefabricated power blocks.
The shift does introduce new constraints. The electrical, mechanical, thermal, and safety challenges of 800V distribution require fresh engineering: solid-state circuit breakers for safety and reliability, more compact intermediate bus converters to step voltage down efficiently, and liquid cooling systems to manage the intense heat produced by high-density compute environments. Workers will need training on high-voltage safety protocols. Component vendors including Texas Instruments, Infineon, and STMicroelectronics are collaborating with Nvidia to develop power semiconductors and converters that can handle the transition reliably.
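To put a number on the cooling burden, a rough estimate of the heat rejected by in-rack conversion alone, before the GPUs themselves are counted, looks like this. The converter efficiency and coolant temperature rise are assumptions chosen for illustration.

```python
# Rough estimate of converter heat load in a 1 MW rack and the water flow
# needed to carry it away. Efficiency and temperature rise are assumptions.
RACK_POWER_W = 1_000_000
CONVERTER_EFFICIENCY = 0.975           # assumed in-rack DC-DC efficiency
WATER_HEAT_CAPACITY_J_PER_KG_K = 4186  # specific heat of water
COOLANT_TEMP_RISE_K = 10               # assumed inlet-to-outlet rise

heat_w = RACK_POWER_W * (1 - CONVERTER_EFFICIENCY)
flow_kg_per_s = heat_w / (WATER_HEAT_CAPACITY_J_PER_KG_K * COOLANT_TEMP_RISE_K)
print(f"conversion loss: {heat_w / 1000:.0f} kW")
print(f"coolant flow:    {flow_kg_per_s:.2f} kg/s (~{flow_kg_per_s * 60:.0f} L/min)")

# Even a 97.5 percent efficient conversion stage leaves ~25 kW of heat inside
# the rack before the GPUs themselves are counted -- hence liquid cooling.
```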
CoreWeave, Lambda, Nebius, Oracle Cloud Infrastructure, and Together AI are among the companies designing for 800V datacenters. Some operators are deploying 800V infrastructure today as a sidecar solution alongside existing 48V racks, allowing a gradual transition rather than a complete teardown.
The historical parallel is revealing. Nvidia cites marine vessels' 1,000V DC systems, which save 20 to 40 percent in energy and carry 30 percent lower maintenance costs. Naval engineers solved the problem of moving megawatts through a confined hull decades ago; the datacenter industry is now applying those hard-won lessons to the AI era. What was once a feature of specialised infrastructure is becoming the standard for commodity computing. Physics does not negotiate with upgrade budgets.