Over the last year, computer companies have been diving deeper into cloud computing: Hewlett Packard made the controversial decision to spin off its information technology business, and Dell bought data storage company EMC. The widespread use of cloud computing is not only intensifying competition, but also driving new advances in the servers that process data from the cloud.
Now, researchers have built a transceiver that could speed up data transfers between optical modules and semiconductor chips used in servers and switches. Built by Fujitsu Laboratories and system-on-chip designer Socionext, the transceiver can transmit data at 56 Gbits/s per channel—twice the speed of current transceiver standards.
At the same time, the researchers made the transceiver faster without adding powerful new circuits. Instead, they combined several existing circuits to reduce the transceiver's total power draw. The new design requires only eight circuits, as opposed to the 16 used by current transceivers (which transfer data at 28 Gbits/s). According to the researchers, this means the new transceiver can transmit data twice as fast without consuming any additional power.
The research could represent a major step toward reducing the amount of electricity used in data centers, which are notorious for swallowing huge amounts of power. There are limits to the amount of electricity that can be supplied to these data centers, owing to the cost of supplying power to the servers and the significant amount of heat radiating from them.
The researchers eliminated half the circuits by devising a new method for detecting timing errors between circuits. The team found that they could remove parts of the clock and data recovery (CDR) circuit, which reads electrical signals and adjusts the timing of other circuits so that they can read incoming data accurately.
The CDR is normally paired with a so-called decision feedback equalizer (DFE), a circuit designed to boost weakened signals so that they can be reliably transmitted between servers. The signals going in and out of the transceiver become more susceptible to degradation as communication speeds increase. The research focused on the CDR and DFE because together they consume about two-thirds of the transceiver's total power.
In conventional transceivers, the DFE boosts the weakened signal based on whether other circuits have determined that the preceding bits represent 1 or 0. Because the two circuits operate on different clocks, the CDR adjusts the DFE's timing to ensure it captures the signal's waveform at its greatest amplitude, when it is clearest.
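The feedback loop a DFE implements can be sketched in a few lines of code: the equalizer subtracts the interference left over from the previous bit decision before deciding the current one. This is a minimal software illustration, assuming NRZ signaling, a single feedback tap, and an arbitrary tap weight of 0.3; the `dfe_slice` function and its parameters are hypothetical, not part of Fujitsu's circuit.

```python
def dfe_slice(samples, feedback_tap=0.3, threshold=0.0):
    """Recover bits from noisy samples by subtracting the estimated
    inter-symbol interference contributed by the previous decided bit
    (a toy one-tap decision feedback equalizer)."""
    bits = []
    prev_symbol = -1.0  # assume the line idles at logical 0 (level -1)
    for sample in samples:
        # Cancel the tail of the previous symbol before slicing.
        corrected = sample - feedback_tap * prev_symbol
        bit = 1 if corrected > threshold else 0
        bits.append(bit)
        prev_symbol = 1.0 if bit else -1.0
    return bits
```

Because each decision feeds back into the next one, the slicer sees a cleaner sample than the raw wire carries, which is why the DFE matters more as line rates rise and signal degradation worsens.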
The breakthrough came when the research team found that they could determine whether the DFE's timing was early or late without using the CDR. The researchers developed a new timing detection method that checks the timing only when three consecutive bits of the incoming signal are 100 or 011. With this new approach, the team could remove the CDR and other circuits, such as the internal clock, and fold their functions into the DFE.
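The pattern-filtered timing check can be illustrated with a toy model. The sketch below assumes an idealized eye in which the waveform moves linearly between bit centers: a clock that samples too early catches the bit just after a transition before it has settled, while a late clock erodes the bit just before the transition, and both effects are visible precisely in the single-transition 100 and 011 patterns the article describes. The function names and the interpolation model are illustrative assumptions, not the circuit Fujitsu and Socionext built.

```python
def sample_waveform(symbols, phase):
    """Sample an idealized NRZ waveform at bit centers shifted by
    `phase` unit intervals (positive = clock late), modeling the wire
    as straight-line interpolation between consecutive symbol levels."""
    samples = []
    last = len(symbols) - 1
    for n, level in enumerate(symbols):
        if phase >= 0:
            nxt = symbols[min(n + 1, last)]
            samples.append(level + phase * (nxt - level))
        else:
            prv = symbols[max(n - 1, 0)]
            samples.append(level + (-phase) * (prv - level))
    return samples

def phase_votes(bits, samples):
    """Emit early/late votes, but only on the single-transition
    patterns 100 and 011, as in the article's timing method."""
    votes = []
    for n in range(len(bits) - 2):
        triple = tuple(bits[n:n + 3])
        if triple in ((0, 1, 1), (1, 0, 0)):
            # The transition lands between bits n and n+1. An early
            # clock shrinks the post-transition sample; a late clock
            # shrinks the pre-transition one.
            err = abs(samples[n + 1]) - abs(samples[n])
            if err != 0:
                votes.append("late" if err > 0 else "early")
    return votes
```

In a real receiver, such votes would steer the sampling phase directly inside the DFE, which is what lets the separate CDR circuitry be removed.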
The researchers said that the new technology is compatible with an upcoming 56 Gbits/s standard from the Optical Internetworking Forum, an organization that promotes computer networking technologies. Fujitsu Laboratories and Socionext plan to use the technology in components linking optical modules and chips. New products using the technology are expected in 2018.