The Liquid-Cooled Plumbing Behind AI’s Power Surge

Following a strategic approach to cooling, power delivery, and rack-level integration will help data centers boost AI compute density.
March 4, 2026
9 min read

What you'll learn:

  • Why space limitations, power-delivery constraints, cooling inefficiencies, and sustainability pressures present challenges for scaling legacy data centers.
  • How strategic, rack-level integration of cooling, power, and compute capabilities enables greater hardware interoperability and ensures optimal thermal performance.
  • Why integrated, modular, and liquid-ready rack designs help operators support larger AI workloads within their existing walls.

The compute demands of AI workloads are skyrocketing. The instinctive response? Build more data centers. But that strategy is becoming harder to execute. Beyond the practical challenges of constructing a giant data center, the electric grid’s capacity is finite, and even securing a grid connection has become difficult. Add in the financial and environmental costs of new facilities, and it’s clear that scale alone isn’t the answer.

The more effective strategy, though not as easy and straightforward, is to unlock more performance from existing footprints through targeted infrastructure upgrades that extend capacity, improve efficiency, and accelerate deployment.

The challenge is that with each new generation, GPUs and other AI accelerators drive up power consumption and thermal output. These leaps are rendering legacy designs obsolete and forcing a fundamental shift in infrastructure strategy. Take NVIDIA’s GB200 Grace Blackwell Superchip, for example. Each module, which combines two Blackwell GPUs and a Grace CPU, can draw roughly 2,700 W, highlighting how quickly thermal loads are climbing in next-generation AI systems.
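
To put that in rack-level terms, a quick back-of-the-envelope estimate helps. The sketch below assumes an NVL72-class configuration of 36 superchips per rack and a 15% overhead for networking, fans, and conversion losses; both figures are illustrative assumptions, not vendor specifications.

```python
# Back-of-the-envelope rack power estimate (all figures are illustrative).
SUPERCHIP_POWER_W = 2_700   # approx. draw of one GB200 module (2 GPUs + 1 CPU)
SUPERCHIPS_PER_RACK = 36    # assumed NVL72-class rack: 72 GPUs across 36 modules
OVERHEAD_FRACTION = 0.15    # assumed networking, fan, and conversion overhead

compute_w = SUPERCHIP_POWER_W * SUPERCHIPS_PER_RACK
total_w = compute_w * (1 + OVERHEAD_FRACTION)
print(f"Compute load: {compute_w/1e3:.0f} kW, rack total: ~{total_w/1e3:.0f} kW")
# Compute load: 97 kW, rack total: ~112 kW -- far beyond legacy 5- to 15-kW racks
```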

With the right approach to cooling, power delivery, and rack-level integration, data center operators can significantly increase compute density and performance without breaking ground on new facilities.

Understanding the Constraints on Compute Density in a Data Center

By and large, legacy data centers aren’t built for the demands of large language models (LLMs) and other high-density AI workloads. Scaling within these environments presents four critical challenges:

  1. Space limitations: Existing rack layouts often lack the physical capacity to accommodate modern, high-density configurations.
  2. Power-delivery constraints: Traditional power distribution units (PDUs) and switchgear were never designed to feed racks with power densities of 60 to 120 kW or more.
  3. Cooling inefficiencies: Even at maximum airflow, traditional air-based cooling systems can’t dissipate the thermal loads of modern AI processors.
  4. Sustainability pressures: As data centers account for an increasing share of global electricity use, operators face growing scrutiny from regulators, investors, and customers to improve energy efficiency.

Each of these constraints impacts reliability, uptime, and return on investment. Engineers and system integrators are eyeing ways to deliver performance gains without taking things offline, which is no small feat when every watt and square foot counts.

Direct-to-Chip Liquid Cooling Uncorks More Power for AI

Cooling is the most immediate and visible constraint for high-performance computing. Air-based systems have reached their practical limits. Even with raised floors, containment aisles, and optimized airflow, traditional setups can’t keep pace with the thermal profiles of AI-grade silicon.
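
A quick sensible-heat calculation shows why. Using the standard relation Q = ṁ·c_p·ΔT, the sketch below estimates the airflow a hypothetical 100-kW rack would demand; the load and temperature rise are assumptions chosen for illustration.

```python
# Airflow needed to carry a rack's heat load: Q = m_dot * c_p * dT.
# All inputs are assumed, typical values.
RACK_LOAD_W = 100_000   # hypothetical 100-kW AI rack
DELTA_T_K = 15.0        # assumed inlet-to-outlet air temperature rise
CP_AIR = 1005.0         # specific heat of air, J/(kg*K)
RHO_AIR = 1.2           # air density, kg/m^3

mass_flow = RACK_LOAD_W / (CP_AIR * DELTA_T_K)   # kg/s of air
cfm = mass_flow / RHO_AIR * 2118.88              # m^3/s -> cubic feet per minute
print(f"Required airflow: {cfm:,.0f} CFM")       # ~11,700 CFM for a single rack
```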

Direct-to-chip liquid cooling has emerged as a potential solution. By routing coolant directly to the chip surface and precisely targeting hot spots, this method drastically improves heat transfer efficiency, stabilizes CPU and GPU temperatures, and minimizes temperature gradients that compromise performance.
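
Running the same numbers for water makes the contrast concrete. Because water carries roughly 3,500 times more heat per unit volume than air, the required flow rates are modest; the heat loads and the 10°C coolant rise below are assumptions.

```python
# The same heat budget handled by water (assumes pure water; real loops often
# use a water/glycol mix with somewhat lower heat capacity).
CP_WATER = 4186.0   # J/(kg*K)
RHO_WATER = 1000.0  # kg/m^3
DELTA_T_K = 10.0    # assumed coolant temperature rise across the cold plates

for label, load_w in [("GB200-class module", 2_700), ("100-kW rack", 100_000)]:
    lpm = load_w / (CP_WATER * DELTA_T_K) / RHO_WATER * 1000 * 60
    print(f"{label}: ~{lpm:.1f} L/min")
# ~3.9 L/min per module and ~143 L/min per rack -- modest plumbing compared
# with the ~11,700 CFM an air-cooled equivalent would demand.
```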

Despite its technical advantages, liquid cooling is often misunderstood as requiring large-scale infrastructure overhauls. But that’s not always the case.

[Image: There are many ways to keep a design cool, including heat pipes and vapor chambers. Credit: Roman Snytsar | Dreamstime]

Self-contained, closed-loop liquid-cooling systems can offer a practical, incremental way forward. These solutions integrate directly into standard server configurations, delivering up to 1,200 W in 1U and 1,500 W or more in 2U — without external piping or distribution units. Operators can achieve 15% power savings on average by significantly reducing airflow requirements, all within existing rack footprints and power budgets.
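
The airflow-to-power link comes from the fan affinity laws, under which fan power scales roughly with the cube of flow rate. The cited ~15% average combines fan savings with reduced facility cooling load; the sketch below isolates just the fan component, and the 10% fan fraction and 50% airflow cut are illustrative assumptions, not measured values from any specific product.

```python
# Fan affinity laws: fan power scales roughly with the cube of flow rate.
# Both figures below are illustrative assumptions.
BASELINE_FAN_FRACTION = 0.10  # assume fans draw ~10% of server power when air-cooled
AIRFLOW_REDUCTION = 0.50      # assume the closed loop halves required airflow

relative_fan_power = (1 - AIRFLOW_REDUCTION) ** 3       # P ~ Q^3
server_savings = BASELINE_FAN_FRACTION * (1 - relative_fan_power)
print(f"Fan power drops to {relative_fan_power:.1%} of baseline; "
      f"~{server_savings:.1%} of server power saved from fans alone")
# Fan power drops to 12.5% of baseline; ~8.8% of server power saved
```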

For greenfield deployments or major facility upgrades, facility-level liquid cooling deserves serious consideration. These systems can support over 3,000 W per socket and enable rack densities approaching 1 MW.

While plumbed liquid cooling requires more infrastructure and may occupy more physical space per rack, it allows organizations to dramatically densify compute within the same overall data center footprint. This means you can scale AI workloads without expanding your facility. The upfront investment is higher, but the long-term ROI is compelling — higher compute density, lower energy consumption, and improved reliability.

For organizations looking to maximize performance without building new data centers, designing around advanced cooling can unlock major efficiency gains and future-proof infrastructure for AI at scale.

Liquid cooling doesn’t have to be an all-or-nothing decision. Operators can start small, deploying self-contained systems that integrate seamlessly into existing environments. By improving thermal transfer at the chip level, these solutions unlock new headroom, enabling higher power densities, better performance, and greater efficiency within the same footprint. In essence, liquid cooling makes it possible to densify compute without expanding space or power budgets.

Rethinking Power Distribution and Delivery for Megawatt Racks

Cooling innovation alone won’t solve the challenges of next-generation compute. As rack densities surge past 120 kW — and with hyperscalers adding 1-MW rack architectures to their roadmaps — the industry faces a fundamental rethink of how power is delivered and distributed. Legacy data centers, many still outfitted with PDUs and switchgear designed for 5- to 15-kW loads, are increasingly misaligned with the demands of AI-optimized infrastructure.

This mismatch is a power liability. Upgrading power architecture is no longer optional; it’s a prerequisite for scalability, efficiency, and resilience. High-efficiency PDUs, DC busways, modular power shelves, and disaggregated power rack platforms are enabling safer, more efficient power delivery for AI workloads.

Hyperscalers are leading a shift toward disaggregated infrastructure, separating power, cooling, and IT into modular, independently scalable components. Google's Mt. Diablo project, for example, introduces AC-to-DC sidecar power racks delivering ±400 V DC, enabling up to 1 MW per rack and reclaiming valuable space inside the IT rack for compute. This approach allows operators to increase density without expanding their physical footprint.
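
The arithmetic behind the voltage choice is straightforward: at a fixed power, conductor current (and with it I²R loss and busbar size) falls as distribution voltage rises. The comparison below against a legacy 415-V three-phase feed uses an assumed 0.95 power factor for illustration.

```python
# Why megawatt racks push toward higher-voltage DC: at fixed power,
# conductor current (and I^2*R loss) falls as voltage rises. Illustrative only.
import math

RACK_POWER_W = 1_000_000                           # 1-MW rack target

i_dc = RACK_POWER_W / 800                          # +/-400-V DC: 800 V pole to pole
i_ac = RACK_POWER_W / (math.sqrt(3) * 415 * 0.95)  # 415-V 3-phase, PF 0.95 (assumed)

print(f"800-V DC feed:  {i_dc:,.0f} A")            # 1,250 A
print(f"415-V AC feed:  {i_ac:,.0f} A per phase")  # ~1,464 A
```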

This innovation is now engaging a broader ecosystem of solution providers. Companies like Flex aren’t merely responding; they’re architecting the future.

At OCP Global Summit 2025, Flex introduced its AI infrastructure platform — a fully integrated solution geared for gigawatt-scale data centers. The platform features 1-MW racks with ±400-V DC power and supports the transition to 800-V DC power architectures, modular cooling up to 1.8 MW, and prefabricated systems that dramatically reduce deployment timelines. It’s a model for scaling smarter, maximizing density and speed without growing the footprint.

Smart Power Management: The Unsung Part of the AI Equation

Equally important is how we manage power. Stranded capacity — provisioned power that sits unused because of uneven distribution — remains a silent drain on operational efficiency. Software-defined and modular power systems allow for dynamic allocation, ensuring power is delivered precisely where it’s needed. This not only improves utilization and reliability, but it also reduces the risk of overprovisioning and unnecessary capital expenditure.
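
As a minimal sketch of the idea, the snippet below caps racks proportionally against a shared budget instead of provisioning each rack statically. The rack names, demands, and budget are hypothetical, and real systems would layer in priorities, headroom, and telemetry feedback.

```python
# Minimal sketch of software-defined power capping: reallocate a shared
# budget to match real-time demand instead of static per-rack provisioning.

def allocate(budget_kw: float, demand_kw: dict[str, float]) -> dict[str, float]:
    """Give each rack its demand if the budget allows; otherwise scale
    all racks proportionally so no capacity sits stranded."""
    total = sum(demand_kw.values())
    scale = min(1.0, budget_kw / total)
    return {rack: round(d * scale, 1) for rack, d in demand_kw.items()}

demand = {"rack-A": 95.0, "rack-B": 40.0, "rack-C": 120.0}  # assumed live telemetry
print(allocate(budget_kw=230.0, demand_kw=demand))
# {'rack-A': 85.7, 'rack-B': 36.1, 'rack-C': 108.2} -- caps track demand,
# unlike fixed per-rack provisioning that strands headroom on idle racks.
```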

As data centers adopt DC power distribution, other technologies like solid-state transformers aim to streamline conversion stages. They help boost energy efficiency and enable a dramatic reduction in electrical room footprint — up to 90%, according to Flex’s estimates, by 2030. This yields two big benefits: lower construction costs by achieving capacity in a smaller footprint, or increased compute density by adding more racks within the same envelope.
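
The benefit of fewer conversion stages compounds multiplicatively, as a simple efficiency-chain calculation shows. The per-stage efficiencies below are illustrative assumptions, not measurements of any particular product.

```python
# Fewer conversion stages compound into real savings. Per-stage efficiencies
# are illustrative assumptions, not measured figures.
from math import prod

legacy_ac = [0.96, 0.98, 0.99, 0.94]  # UPS, transformer, PDU, server PSU (AC->DC)
sst_dc = [0.98, 0.975]                # solid-state transformer, rack DC-DC shelf

print(f"Legacy AC chain: {prod(legacy_ac):.1%} end-to-end")  # ~88%
print(f"SST DC chain:    {prod(sst_dc):.1%} end-to-end")     # ~96%
# Under these assumptions, the shorter chain avoids roughly 95 kW of
# conversion loss per megawatt of IT load -- heat that never needs cooling.
```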

For system integrators, the challenge is execution without disruption. Downtime isn’t an option. That’s why we’re seeing a shift toward hot-swappable, front-access designs that simplify installation and maintenance in live environments.

Rack-Level Integration: Where Cooling, Power, and Compute Converge

Actual density gains occur when cooling, power, and compute stop operating in silos and start working together at the rack level. Instead of treating each subsystem independently, forward-thinking leaders are adopting architectures that consolidate these functions into a unified ecosystem.

With this approach, integrated racks based on open standards enable interoperability across hardware generations and simplify maintenance, while liquid-cooled designs ensure optimal thermal performance. This model supports long-term scalability as chip thermal design power (TDP) continues to rise.

One of the most effective approaches is deploying turnkey, vertically integrated liquid-cooled rack solutions that combine power delivery, thermal management, and IT hardware into a single, pre-engineered system. These solutions eliminate the need for complex multi-vendor integration, offering faster deployment, simplified operations, and a single point of accountability.

Partnerships also matter. By working with solution providers who deliver fully integrated, rack-level systems complete with matched cooling, power, and compute components, operators get a single point of contact and streamlined warranty coverage. This cuts through complexity, reduces risk, and accelerates deployment, letting data centers scale faster without the usual integration headaches.

For system integrators, these architectures create new opportunities to provide value through interoperability testing, performance validation, and the deployment of pre-integrated rack solutions for optimized AI workloads. By pairing advanced liquid cooling with high-efficiency power delivery, operators can reduce total facility energy use even as computing output rises, cutting rack-level power consumption by several kilowatts compared to air-based systems.

These efficiency gains translate directly into higher compute density per square foot. When cooling and power systems operate more efficiently, racks can support higher wattage and thermal loads without exceeding facility limits. That means more compute per floor tile, maximizing the value of existing real estate and delivering greater performance without expanding the data center footprint.

High-voltage DC busbars and Titanium-rated power supplies push conversion efficiency even higher, reducing waste heat and downstream cooling requirements.
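
The PSU piece of that equation is easy to quantify. Using nominal 80 PLUS efficiency levels at 50% load, the sketch below compares waste heat for a hypothetical 120-kW rack fed by Platinum- versus Titanium-rated supplies.

```python
# What a higher-efficiency PSU buys at rack scale. Efficiencies are nominal
# 80 PLUS levels at 50% load; the rack size is an assumption.
RACK_IT_LOAD_KW = 120.0

for rating, eff in [("Platinum", 0.94), ("Titanium", 0.96)]:
    waste_kw = RACK_IT_LOAD_KW * (1 / eff - 1)   # conversion loss becomes heat
    print(f"{rating}: {waste_kw:.1f} kW of waste heat per rack")
# Platinum: 7.7 kW, Titanium: 5.0 kW -- ~2.7 kW less heat to power and cool.
```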

Packing New Infrastructure in the Same Space

The data center of the future doesn’t have to be larger to support the needs of AI; it just needs to be designed more strategically. By focusing on the right infrastructure upgrades, operators can unlock the density and performance required for AI workloads within their existing walls.

Cooling and power delivery are the most crucial starting points. Engineers and system integrators who tackle these challenges together — through integrated, modular, and liquid-ready designs — will be able to evolve their data centers in tandem with each new generation of compute hardware, thereby properly supporting increasingly power-hungry AI solutions.

By rethinking infrastructure, you can get more out of the footprint.

About the Author

Bernie Malouin

Founder of JetCool Technologies and VP of Flex Liquid Cooling

Bernie Malouin founded JetCool Technologies after eight years at MIT Lincoln Laboratory. There, he served as the Chief Engineer leading the technical development of a $100M+ airborne payload program for the U.S. Government. Bernie received a BS in Mechanical and Aeronautical Engineering (RPI) and a Ph.D. in Mechanical Engineering (RPI). You may find him fixing a tractor, flying an airplane, or bicycling a rail trail.
