One Powerful Week: A New Playbook for Data Center Power Supplies
AI is siphoning huge amounts of power from the grid. One way to manage these mounting power demands is to overhaul the power electronics that convert and distribute electricity inside the sprawling data centers driving the AI boom.
For this takeover week, Electronic Design consulted with industry insiders and technical experts to fill in the blanks for engineers about how AI's thirst for electricity is transforming server power supplies from the inside out.
At the heart of these data centers are thousands of high-performance AI chips burning through vast amounts of power to train large language models (LLMs) and run other computationally intense workloads. Next-generation GPUs such as NVIDIA’s Blackwell B100 and B200 each consume more than 1,000 W, roughly 3X the power budget of a traditional CPU. These demands are driving a rapid escalation in data-center power densities, with per-rack power specifications climbing from the 30-to-40-kW range to more than 100 kW.
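For a rough sense of how those per-chip numbers roll up to rack level, consider the back-of-the-envelope sketch below. The GPU count and overhead factor are illustrative assumptions, not specifications from any vendor's rack design:

```python
# Back-of-the-envelope rack power estimate. All inputs are illustrative
# assumptions, not specifications for any particular rack design.
gpus_per_rack = 72        # assumed GPU count for a dense AI rack
watts_per_gpu = 1_000     # ~1 kW per next-gen GPU, per the figures above
overhead_factor = 1.4     # assumed CPUs, memory, NICs, fans, conversion losses

rack_power_kw = gpus_per_rack * watts_per_gpu * overhead_factor / 1_000
print(f"Estimated rack power: {rack_power_kw:.0f} kW")  # -> roughly 101 kW
```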
But the processors themselves are not the only culprits behind generative AI's power binge. Compounding the problem are inefficiencies in how power is traditionally converted and distributed inside these colossal data centers.
After electricity enters the rack, it runs through several stages of power electronics before reaching the processor. First, power supply units (PSUs) convert high-voltage AC power into 54 V or 48 V DC before distributing it over busbars to all the servers and other hardware in the rack. Next, DC-DC converters inside the server step down the voltage, usually to 12 V, before supplying it over the motherboard to voltage regulators that translate it to the specific voltage used by the SoC. Today, the most advanced chips run on core voltages of approximately 0.8 V. These voltage regulators are the final stage in the power delivery network (PDN), routing current through the copper traces on the PCB and into the pins on the processor's package.
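To see why that final step-down matters so much, it helps to trace the current the same load draws at each node of the chain. Here is a minimal sketch using the voltages from the description above and an assumed 1-kW processor load, with conversion losses ignored for simplicity:

```python
# Trace the current a single 1-kW processor load draws at each node of the
# power-delivery chain described above. Voltages come from the text; the
# load is illustrative, and conversion losses are ignored for simplicity.
load_w = 1_000.0  # ~1-kW AI processor

nodes = {
    "54-V rack busbar": 54.0,
    "12-V server rail": 12.0,
    "0.8-V core rail": 0.8,
}

for name, volts in nodes.items():
    amps = load_w / volts  # I = P / V at constant power
    print(f"{name}: {amps:,.0f} A")

# Output: ~19 A on the busbar, ~83 A on the server rail, and 1,250 A at the
# core. Moving the same kilowatt at core voltage takes over 60X the current,
# which is why the last conversion stage sits as close to the chip as possible.
```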
Every one of these conversion steps introduces losses, and the transmission lines between them add even more. These I²R losses—caused by resistance in everything from the busbars and cables in the rack to the copper traces in the server's main circuit board—are increasing as AI chips demand ever-higher currents. As much as 10 to 20% of the power entering a rack is lost as heat before it even reaches the processor. But removing all that heat imposes its own energy cost: approximately 40% of all the electricity used to run a data center is devoted to cooling.
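To put the I²R term in perspective, the short sketch below runs the same 1-kW load through a hypothetical 1-milliohm distribution path at each voltage level. The resistance value is a round illustrative number, not a measured busbar or trace figure:

```python
# Illustrate how I^2*R loss scales with distribution voltage. The 1-milliohm
# path resistance is a hypothetical round number, not a measured value.
r_path_ohm = 0.001  # assumed total resistance of the distribution path
load_w = 1_000.0    # same 1-kW load as above

for volts in (54.0, 12.0, 0.8):
    amps = load_w / volts
    loss_w = amps**2 * r_path_ohm  # P_loss = I^2 * R
    print(f"{volts:>5.1f}-V bus: {amps:7.1f} A, {loss_w:8.2f} W lost")

# The same milliohm dissipates ~0.34 W at 54 V but ~1,562 W at 0.8 V: more
# than the load itself. Higher-voltage distribution and shorter low-voltage
# runs attack exactly this term.
```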
To wring more performance out of every watt, engineers are rethinking power electronics at every stage of the system, from the grid to the processor's core. But as rack power demands climb past 100 kW, the PSU has become one of the focal points. New power switching technologies such as SiC and GaN bring faster switching speeds, higher efficiencies, and better heat dissipation, and they are being complemented by more advanced multi-level topologies and digital controllers.
This special coverage dives into the details, highlighting articles from technical experts at Texas Instruments, Infineon, Vicor, Analog Devices, and others. You can find all the articles below. They cover everything from the complexities of high-voltage DC (HVDC) power distribution to the intricacies of power-supply design to innovations in circuit protection.
The takeover week starts on May 12, 2025, and runs through May 17, 2025. We will also assemble several of the articles into an eBook, which you will be able to download below when it's available.