Originally published at: NVIDIA 800 V HVDC Architecture Will Power the Next Generation of AI Factories | NVIDIA Technical Blog
The exponential growth of AI workloads is increasing data center power demands. Traditional 54 V in-rack power distribution, designed for kilowatt (kW)-scale racks, cannot support the megawatt (MW)-scale racks coming soon to modern AI factories. NVIDIA is leading the transition to 800 V HVDC data center power infrastructure to support 1 MW IT…
Copper overload: The physics of using 54 V DC in a single 1 MW rack requires up to 200 kg of copper busbar. The rack busbars alone in a single 1 gigawatt (GW) data center could require up to half a million tons of copper.
May I know how you came to this conclusion? @jwitsoe
1 GW = 1000 MW → 200 kg/MW x 1000 = 200,000 kg = 200 tons.
This is 2,500 times less than your stated figure of “500,000” tons, a number that appears highly unrealistic.
I believe that should have said “half a million pounds of copper”.
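For what it's worth, here is a back-of-the-envelope check of both figures in Python, assuming the article's 200 kg of busbar per 1 MW rack; the 54 V vs. 800 V current comparison is my own illustration, not from the article:

```python
# Back-of-the-envelope check of the copper figures discussed above.
# Assumptions: 200 kg of busbar copper per 1 MW rack (from the article),
# 1 GW of total IT load, and standard unit conversions.

RACK_POWER_W = 1_000_000        # 1 MW per rack
BUS_VOLTS_54 = 54               # legacy in-rack distribution
BUS_VOLTS_800 = 800             # proposed HVDC distribution

# Bus current needed to deliver 1 MW at each voltage (ignoring losses)
amps_54 = RACK_POWER_W / BUS_VOLTS_54      # ~18,500 A
amps_800 = RACK_POWER_W / BUS_VOLTS_800    # 1,250 A

# Total rack busbar copper for a 1 GW data center at 200 kg per 1 MW rack
racks = 1_000                              # 1 GW / 1 MW
copper_kg = 200 * racks                    # 200,000 kg
copper_metric_tons = copper_kg / 1_000     # 200 t
copper_pounds = copper_kg * 2.20462        # ~441,000 lb

print(f"54 V bus current:  {amps_54:,.0f} A")
print(f"800 V bus current: {amps_800:,.0f} A")
print(f"Copper: {copper_kg:,} kg = {copper_metric_tons:,.0f} t = {copper_pounds:,.0f} lb")
```

That lands at roughly 200 metric tons, or about 440,000 lb, which is consistent with "half a million pounds" rather than half a million tons.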
Is anyone working on direct 800 V to PoL conversion for this new architecture, or do you always envision having an intermediate bus voltage?
±400 VDC is much safer than 0-800 VDC, and it is much easier to get connectors, cables, and other parts for that type of design. So why 0-800 VDC? A few electric car companies use it (not Tesla), but people don't work around and inside those.
Not completely: in a unipolar system there is only one dangerous conductor, and it is easier to protect, while bipolar high-current systems are very complex to manage in terms of voltage balancing and solid-state protection. In a unipolar 800 V system it is simpler to use a single SSCB for protection. For more info see www.currentos.org; its 700 V DC band is similar to the 800 V used by NVIDIA (see the voltage bands).
Unfortunately, the network topology is not specified in the article. However, a 2-wire DC network sounds like an isolated network or an AC-side grounded network according to ODCA specifications. This means that the same voltages can be expected line-to-line and line-to-earth as in a ±400 V DC network, but without the side effects of a current-carrying ground.
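To make that concrete, here is a rough sketch of the two options being discussed (my own illustration, assuming the 2-wire 800 V bus is referenced through the AC side so that its midpoint sits near earth potential):

```python
# Rough comparison of a unipolar 2-wire 800 V bus (midpoint referenced to
# earth via the AC side) and a bipolar ±400 V 3-wire network, as discussed
# above. Values are illustrative, not from the article.

LOAD_W = 1_000_000  # 1 MW rack

topologies = {
    "800 V unipolar, 2-wire (AC-side grounded)": {
        "line_to_line_V": 800,
        "line_to_earth_V": 400,  # conductors sit at roughly +400 V and -400 V
        "conductors": 2,
    },
    "±400 V bipolar, 3-wire": {
        "line_to_line_V": 800,   # pole to pole
        "line_to_earth_V": 400,  # each pole to the grounded midpoint
        "conductors": 3,         # includes a potentially current-carrying neutral
    },
}

for name, t in topologies.items():
    pole_to_pole_amps = LOAD_W / t["line_to_line_V"]  # 1,250 A for a 1 MW load
    print(f"{name}: {t['line_to_line_V']} V line-to-line, "
          f"{t['line_to_earth_V']} V line-to-earth, "
          f"{t['conductors']} conductors, {pole_to_pole_amps:,.0f} A at 1 MW")
```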
Not bad, but given the increased share of renewables, I would also think about having DSM (Demand-Side (clock) Management) and MPPT (Maximum Power Point Tracking) capabilities - just regulate GPU clocks to match the available power/voltage… (So that thing can run directly off solar for non-time-critical applications :-)
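A purely conceptual sketch of that idea is below; all names and numbers are hypothetical, and in practice the cap would be applied through something like the driver's power-limit knob (e.g. via nvidia-smi or NVML):

```python
# Conceptual sketch of demand-side (clock/power) management for GPUs:
# track the available bus power the way an MPPT tracker follows a PV array,
# and throttle the GPUs instead of browning out. GPU counts, limits, and the
# simulated solar samples are all assumptions for illustration.

GPU_COUNT = 72                      # assumed GPUs per rack
MIN_CAP_W, MAX_CAP_W = 300, 1200    # assumed per-GPU power-limit range

def apply_gpu_power_cap_w(cap_w: float) -> None:
    # Hypothetical stand-in for setting a per-GPU power limit (e.g. via NVML).
    print(f"  setting per-GPU power cap to {cap_w:.0f} W")

def follow_available_power(available_w_samples) -> None:
    # available_w_samples: telemetry from the facility or PV plant, in watts
    for budget_w in available_w_samples:
        per_gpu_w = budget_w / GPU_COUNT
        cap_w = max(MIN_CAP_W, min(MAX_CAP_W, per_gpu_w))
        print(f"available: {budget_w / 1000:.0f} kW")
        apply_gpu_power_cap_w(cap_w)  # workload slows instead of browning out

# Simulated solar output over a few sampling intervals (watts)
follow_available_power([90_000, 60_000, 30_000, 75_000])
```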
But for simplicity of voltage conversion, a 400 V DC bus is a much better proposition - nearly any good 240 V AC SMPS with DC-mode-aware APFC can run off it… (try finding and ordering some 900 V electrolytic capacitors vs. 450 V ones :-D)
We think galvanically isolated direct high-power conversion from MV AC to LV DC (aka a solid-state transformer, or SST) is advantageous for builders and owners of hyperscale data centers.
+ Versatile and capable operation with a single 11 MW converter
+ Infrastructure and installation savings
+ Enhanced grid quality and stability
We already have a unit built for testing and field deployment. Anyone interested?
An interesting article, but I couldn't see where emergency generators and UPSes would fit into the architecture. It looks like a good starting point for a design, but it isn't complete yet and doesn't take all real-world needs into consideration.
This is an impressive step forward in large-scale AI infrastructure. The use of an 800 V HVDC architecture will not only improve power efficiency but also reduce distribution losses across NVIDIA’s AI data centers. It’s exciting to see how this shift toward high-voltage direct current systems will support next-gen AI workloads while maintaining sustainability goals. Looking forward to seeing the implementation details and performance metrics once deployed.
Are significant parts of this architecture supposed to be implemented on PCBs? Who would supply such products? Can they guarantee proper isolation coordination?
The power network described doesn’t address the redundancy (minimum N+1) for power and cooling provisions to ITE that is a core expectation in non-AI environments. Would the same design strategies be relevant for AI facilities as for non-AI facilities, i.e., block-redundant or parallel-redundant reticulations?
How will the necessary capability for switching and safe isolation of the various elements of the 800 VDC system be implemented? Will there be a need for switches that can safely (and physically?) isolate 800 VDC?
This is very interesting. How does my company become a part of this electrical ecosystem collaboration?
Where will the voltage tolerance on that 800 V bus end up: ±10%, ±5%?
This is a really interesting direction, especially with AI workloads pushing power requirements so quickly.
Moving from 54V to 800V HVDC seems like a necessary step to handle MW-scale racks efficiently.
It’ll be interesting to see how fast the industry can adapt to this kind of infrastructure change.
Definitely a big shift for future data center design.