New Tech Helps Keep High-Speed Serial Connections Cool

Feb. 22, 2023
Speeds continue to climb in data centers, spurring Molex and other companies to come up with new ways to manage thermals.

Check out more coverage of DesignCon 2023.

As the world amasses increasing amounts of data, technology companies are trying to move it all faster. To keep up with the need for speed in their sprawling data centers and other systems, they’re turning to a new generation of high-performance chips that bring data rates of 28, 56, and 112 Gb/s to the table.

But such chips, which frequently sit at the heart of switches, servers, and other gear in a modern data center, are useless if they can’t communicate at such speeds with other parts in a system. It’s up to engineers to resolve the signal integrity (SI), power integrity (PI), electromagnetic compatibility (EMC), and other issues that arise with every new generational leap in throughput.

The high speeds also present engineers with challenges when it comes to managing the excess heat they create, specifically inside the input/output (I/O) modules that plug into the bandwidth- and power-hungry switches in data centers. Such modules are seeing upgrades to 200-, 400-, and even 800-Gb/s transmission rates. These connectors route data into optical or copper cables that send it over longer distances in the system.
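
As a rough sanity check of where those rates come from (the lane counts and per-lane speeds below are illustrative assumptions, not figures from the article), a module's aggregate throughput is simply its lane count multiplied by the per-lane SerDes rate:

```python
# Rough sanity check: aggregate module throughput = lanes x per-lane rate.
# Lane counts are illustrative; a QSFP-DD module, for example, carries
# 8 electrical lanes.
LANES = {"QSFP": 4, "QSFP-DD": 8}

def module_rate_gbps(form_factor: str, lane_rate_gbps: float) -> float:
    """Raw aggregate throughput of a pluggable module in Gb/s."""
    return LANES[form_factor] * lane_rate_gbps

print(module_rate_gbps("QSFP-DD", 50))   # 400.0 -> a 400-Gb/s module
print(module_rate_gbps("QSFP-DD", 100))  # 800.0 -> an 800-Gb/s module
```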

At this year’s DesignCon, leading players in the connectivity industry, including Molex and TE Connectivity, showed off new connector technologies that claim to better dissipate heat from pluggable I/O modules.

Keeping I/O modules from overheating is a high priority, as excess heat degrades high-speed signals. Heat also wears out components faster, so systems must be cooled effectively to prevent failures.

Hasan Ali, associate manager of new product development at Molex, pointed out that only a few years ago it was a challenge to cool these modules when they consumed 10 W. But as more data funnels through them, they will have to dissipate even more power (30 W or more in the future) while staying within the same form factor.

Traditional thermal-management solutions, he said, are not going to be robust enough to get the job done.
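
To see why, consider the basic steady-state relationship: a module's temperature rise is its dissipated power times the thermal resistance of its cooling path. The sketch below uses hypothetical resistance and temperature-limit values to show how a budget that works at 10 W collapses at 30 W:

```python
# Minimal sketch of why more power in the same form factor is hard:
# steady-state temperature rise scales linearly with dissipated power
# for a fixed thermal resistance (delta_T = P * R_th).
# R_th, ambient, and the limit below are hypothetical example values.

def case_temp_c(ambient_c: float, power_w: float, r_th_c_per_w: float) -> float:
    """Steady-state case temperature for a given thermal resistance."""
    return ambient_c + power_w * r_th_c_per_w

R_TH = 2.0      # degC/W, module-to-air thermal resistance (example)
AMBIENT = 45.0  # degC, inlet-air temperature (example)
LIMIT = 70.0    # degC, case-temperature limit (example)

for p in (10, 20, 30):
    t = case_temp_c(AMBIENT, p, R_TH)
    print(f"{p} W -> {t:.0f} degC {'OK' if t <= LIMIT else 'over limit'}")
# 10 W fits; at 30 W the same R_th blows past the limit, so the
# cooling path itself (R_th) has to improve to stay within spec.
```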

Getting the “Drop” on Heat

The new thermal-management solution Molex showed off at DesignCon is called the “drop-down” heatsink. In a demo, Molex, partnering with Multilane, used it to cool a quad small form-factor pluggable double-density (QSFP-DD) I/O module, which is widely used in high-bandwidth switches in the data center (Fig. 1).

While companies have access to a wide range of technologies to beat the heat, from heat pipes to “zipper-fin” heatsinks, thermal management has long been a challenge when it comes to I/O modules. At issue is the “dry” metal-to-metal contact between the power-hungry module and the heatsink above it. Ali warned that dry contact, due to its high thermal contact resistance, limits how much optimizing the heatsink itself can help.

One way to pull heat out of the system is to use a thermal gap pad or other thermal interface material (TIM) between the module and the heatsink mounted on top of it. The tradeoff, as Ali pointed out, is durability.

Every time the connector is plugged into (or unplugged from) the cage, the resulting friction can damage the material. Sharp edges on the module and other geometric constraints may also interfere with it.
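
The trade-off is easier to see as a simple series thermal stack: heat crosses the module-to-heatsink interface (dry contact or TIM), then the heatsink itself. A minimal sketch with hypothetical resistance values shows why the interface dominates:

```python
# Series thermal stack from module case to air:
# total resistance = interface (dry contact or TIM) + heatsink.
# All values are hypothetical; the point is that a high dry-contact
# resistance dominates, so optimizing the heatsink alone helps little.

def temp_rise_c(power_w: float, r_interface: float, r_heatsink: float) -> float:
    """Temperature rise across the interface plus heatsink (degC)."""
    return power_w * (r_interface + r_heatsink)

P = 20.0      # W dissipated by the module (example)
R_SINK = 0.8  # degC/W, heatsink-to-air (example)
R_DRY = 1.5   # degC/W, dry metal-to-metal contact (example)
R_TIM = 0.3   # degC/W, with a gap pad filling the interface (example)

print(temp_rise_c(P, R_DRY, R_SINK))  # 46.0 degC rise with dry contact
print(temp_rise_c(P, R_TIM, R_SINK))  # 22.0 degC rise with a TIM
```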

According to the company, the drop-down heatsink solves the problem by taking friction out of play. As the module is inserted into the cage, the heatsink makes no direct contact with it until it’s around 90% inserted. The lack of contact protects the thermal material plastered on the heatsink, both from sharp edges on the front of the connector and from any angled insertion due to cable loading.

The remaining 10% of the insertion is where the heatsink “drops down” through a proprietary mechanism (that the company declined to detail) and makes intimate contact with the module being inserted. Dropping the heatsink down this way protects the material for more than 100 insertion cycles, said Molex.
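
The externally visible behavior reduces to a toy model: no contact for roughly the first 90% of the insertion stroke, then engagement over the final stretch. (The mechanism itself is proprietary, so everything below other than the 90% figure is an assumption.)

```python
# Toy model of the drop-down insertion sequence: the heatsink stays
# lifted until the module is ~90% inserted, so the TIM on the heatsink
# never sees sliding friction from the module's sharp leading edges.
ENGAGE_FRACTION = 0.9  # threshold cited by Molex

def heatsink_contact(insertion_fraction: float) -> bool:
    """True once the heatsink has dropped onto the module."""
    return insertion_fraction >= ENGAGE_FRACTION

for f in (0.0, 0.5, 0.89, 0.95, 1.0):
    print(f"{f:.2f} inserted -> contact: {heatsink_contact(f)}")
```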

Compared to a traditional solution, the new heatsink lowers temperatures by up to 9°C at the point in the module where the digital signal processor (DSP) that drives data down the cable is usually located (Fig. 2).

“Bridging” the Thermal Gap

As the insides of systems become more cramped, there’s limited space left for heat to dissipate or air to flow.

TE’s “thermal bridge” technology, also on display at DesignCon, is a mechanical alternative to a traditional TIM or gap pad. It leverages a series of spring-loaded metal plates to better whisk heat away from I/O modules.

TIMs are usually spongy, rubberized materials that act as a conduit between a hot module and a cold plate. Their elasticity helps them conform to the module and pull out heat more effectively.

The thermal bridge, first introduced in 2019, is designed to sit between the pluggable I/O and the cold plate, heat pipes, or ganged heatsinks mounted on it. It’s composed of a series of interleaved, parallel metal plates integrated with springs that compress as the connector is plugged into the cage. Thus, the bridge can adjust to varying gaps between the module and heatsink as excess heat travels between them.

Traditional thermal pads or other materials may have to be squeezed against the module to make sure all of the heat transfers effectively into the heatsink above it. In many cases, this requires external hardware.

TE said the thermal bridge, which promises to reduce thermal resistance where the module and heatsink make contact by 20% to 40%, is spring-loaded to control the force applied to other parts of the system, with one millimeter of compression built into the device.
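
A rough sketch of that behavior, modeling the spring contact with Hooke's law and applying TE's quoted 20 to 40% resistance reduction (the spring constant and baseline resistance are invented for illustration):

```python
# Spring-loaded thermal bridge: contact force follows Hooke's law over
# the 1 mm of built-in compression (from TE); the spring constant and
# baseline interface resistance below are hypothetical.

K_SPRING = 5.0       # N/mm of compression (hypothetical)
MAX_TRAVEL_MM = 1.0  # compression range built into the device (from TE)
R_BASELINE = 1.0     # degC/W interface resistance with a gap pad (example)

def contact_force_n(compression_mm: float) -> float:
    """Hooke's-law contact force, clamped to the bridge's travel."""
    x = min(max(compression_mm, 0.0), MAX_TRAVEL_MM)
    return K_SPRING * x

print(contact_force_n(0.6))  # 3.0 N at 0.6 mm of compression

# TE's quoted 20-40% improvement, applied to the hypothetical baseline:
for cut in (0.20, 0.40):
    print(f"{cut:.0%} reduction -> {R_BASELINE * (1 - cut):.2f} degC/W")
```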

The thermal bridge delivers up to 2X lower thermal resistance than traditional solutions such as gap pads, according to TE. It supports the company’s single, stacked, and ganged QSFP, QSFP-DD, SFP, and SFP-DD connectors (Fig. 3).

High Speeds Turn Up the Heat

Keeping things cool in the data center is only going to get more difficult as data rates continue to climb.

The current state of the art in the world of high-speed serial interfaces is 112-Gb/s SerDes. But everyone from semiconductor giants to connector and cable manufacturers is taking steps to usher in 224-Gb/s PAM-4.
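
The arithmetic behind those headline numbers is straightforward: PAM-4 encodes two bits per symbol, so the symbol (baud) rate is half the bit rate, ignoring FEC and encoding overhead:

```python
# PAM-4 carries 2 bits per symbol, so baud rate = bit rate / 2
# (overhead from FEC and line coding is ignored here).
BITS_PER_PAM4_SYMBOL = 2

def baud_rate_gbd(bit_rate_gbps: float) -> float:
    """Symbol (baud) rate of a PAM-4 serial lane."""
    return bit_rate_gbps / BITS_PER_PAM4_SYMBOL

print(baud_rate_gbd(112))  # 56.0 GBd for today's 112-Gb/s SerDes
print(baud_rate_gbd(224))  # 112.0 GBd for next-gen 224-Gb/s lanes
```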

As switches, servers, and other systems in the data center upgrade to these speeds, they’re bound to get even more power-hungry, forcing vendors to get more creative when it comes to thermal management.

One of the advantages of Molex’s new heatsink is that it enables more power to be pumped into the I/O modules at the same temperature—and without any changes to how the system is designed or operated.

The cooling properties of the new technology also open the door to customers looking to dial down the speed of the fans blowing heat out of a system. “By replacing the traditional heatsink solution with [the drop-down heatsink], customers will be able to run the fans at a lower duty cycle if I/O is the limiting factor,” said Molex’s Ali.
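
As a hypothetical illustration of that benefit, a simple proportional fan curve settles at a lower duty cycle once the module gains thermal headroom. Every number below is invented except the 9°C figure, which is the improvement Molex cites at the DSP hot spot:

```python
# Hypothetical proportional fan curve: duty rises 5% per degC above a
# target temperature, floored at a 30% minimum and capped at 100%.

def fan_duty(temp_c: float, target_c: float,
             min_duty: float = 0.3, gain: float = 0.05) -> float:
    """Fan duty cycle (0.0-1.0) for a given monitored temperature."""
    return min(1.0, max(min_duty, min_duty + gain * (temp_c - target_c)))

# A module running 9 degC cooler lets the same controller slow down:
print(fan_duty(temp_c=70.0, target_c=55.0))  # 1.0 -> fans at full speed
print(fan_duty(temp_c=61.0, target_c=55.0))  # 0.6 -> 9 degC cooler, slower fans
```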

Molex is working with customers behind the scenes to bring the drop-down heatsink to next-generation systems. Volume production is expected to start by the end of 2023.


About the Author

James Morra | Senior Staff Editor

James Morra is a senior staff editor for Electronic Design, where he covers the semiconductor industry and new technology trends. He also reports on the business behind electrical engineering, including the electronics supply chain. He joined Electronic Design in 2015 and is based in Chicago, Illinois.
