The AI race is running on massive amounts of electricity. U.S. demand for electricity is projected to climb more than 15% by 2030, with data centers among the primary drivers, according to a recent analysis by power-sector consulting firm Grid Strategies. To keep up, tech giants such as Google, Meta, Amazon, and Microsoft are scrambling to secure reliable power at scale for their future AI operations, increasingly turning to nuclear power as a way to bypass bottlenecks in the grid.
But as companies focus on supplying power to these data centers, a parallel challenge is arising inside the facilities themselves. Today's graphics processing units (GPUs) and other AI accelerator chips burn through more than 1,000 W to run computationally intense workloads, including AI training and inference. These escalating demands are pushing existing power architectures to their limits, with rack-level power densities climbing from the 10-to-40-kW range to as high as 100 to 200 kW in only a few years.
And as AI chips grow more power-hungry, the problem is not going away: the demands of machine learning are projected to push power-per-rack specifications past 500 kW before 2030. The surge is spurring power electronics and systems engineers to reevaluate the entire power architecture in data centers. It is no longer enough to simply deliver more power. It must be converted and distributed efficiently across every stage of the power chain to minimize the losses dissipated as heat.
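To see why every stage matters, consider a back-of-the-envelope sketch of a multistage power chain. The stage count, per-stage efficiencies, and 200-kW rack load below are illustrative assumptions, not figures from any particular design:

```python
# Illustrative only: stage efficiencies and rack load are assumed values.
# Losses compound multiplicatively across the power chain, so even
# respectable per-stage efficiency leaves significant heat at rack scale.

rack_power_kw = 200.0  # assumed load delivered at the point of load

# Hypothetical five-stage chain: rectification, DC-bus conversion,
# rack distribution, intermediate bus converter, point-of-load regulator
stage_efficiencies = [0.98, 0.98, 0.99, 0.97, 0.92]

overall = 1.0
for eta in stage_efficiencies:
    overall *= eta  # end-to-end efficiency is the product of the stages

input_power_kw = rack_power_kw / overall
heat_kw = input_power_kw - rack_power_kw

print(f"End-to-end efficiency: {overall:.1%}")            # ~84.8%
print(f"Input power required:  {input_power_kw:.1f} kW")  # ~235.7 kW
print(f"Dissipated as heat:    {heat_kw:.1f} kW")         # ~35.7 kW
```

Under those assumptions, shaving even a single point of loss off one stage recovers kilowatts of heat per rack, which is why engineers are scrutinizing every conversion step.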
At the rack level, technology giants are collaborating on the shift from the industry-standard 48 V DC power distribution to ±400 V or 800 V DC. At the server level, semiconductor firms are upgrading voltage regulators to sling power faster and more efficiently over the "last inch" of the circuit board and into the point of load (POL) at the GPU or other SoC. Additionally, chip designers are rewiring the power networks inside the processor and the package to improve efficiency.
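The arithmetic behind the higher-voltage push is simple: for the same power, raising the distribution voltage cuts current proportionally, and conduction loss falls with the square of the current. Here is a minimal sketch; the 200-kW rack load and 0.5-mΩ busbar resistance are assumed round numbers for illustration:

```python
# Illustrative comparison of rack-bus current and busbar conduction loss.
# The rack power and distribution resistance are assumed, not measured.

rack_power_w = 200_000   # 200-kW rack (assumed)
busbar_r_ohm = 0.0005    # 0.5-mOhm busbar resistance (assumed)

for bus_v in (48, 800):
    current_a = rack_power_w / bus_v        # I = P / V
    loss_w = current_a**2 * busbar_r_ohm    # P_loss = I^2 * R
    print(f"{bus_v:>4}-V bus: {current_a:>7.0f} A, "
          f"{loss_w / 1000:.2f} kW lost in the busbar")
```

At 48 V, that hypothetical rack draws more than 4,100 A and burns roughly 8.7 kW in the busbar alone; at 800 V, the current drops to 250 A and the loss to about 31 W, an improvement by a factor of (800/48)² ≈ 278.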
In this roundup, Electronic Design dives into the details of designing, validating, and testing power delivery solutions for today's data centers. It covers everything from the complexities of power supplies and the intricacies of voltage regulators to the EDA and test-and-measurement tools for navigating the issues. Future installments will elaborate on the challenges of signal integrity (SI) and power integrity (PI), both crucial to designing optimal power delivery networks (PDNs), as well as thermal management.
Do you have thoughts about what we should cover next? Please leave a comment or respond to the survey below.
The End of the Line for 48-V Power Architectures?
Under the Hood of the Latest AI Power Supply Units
The UPS: Rugged, Reliable Backup Power for Data Centers
Hot Swapping and High-Current Circuit Protection
Voltage Regulators: Bringing Power to the Point of Load
About the Author
James Morra
Senior Editor
James Morra is the senior editor for Electronic Design, covering the semiconductor industry and new technology trends, with a focus on power electronics and power management. He also reports on the business behind electrical engineering, including the electronics supply chain. He joined Electronic Design in 2015 and is based in Chicago, Illinois.