SFI Will Challenge, Reward Computer Designers

Nov. 10, 2003
While switch-fabric interconnect (SFI) technology is common in communications applications, its recent migration to the computing environment is gaining a lot of attention. The design considerations in this more challenging arena are dramatically different, involving latency, topology, aggregate throughput, and other performance-related issues.

The change from a multipoint, or parallel, backplane to the emerging SFI solution creates some major challenges for computer designers. In communications, adopting SFI was simply a matter of moving the external topology onto the backplane. Computer designers, by contrast, must seriously consider whether SFI technology fits the backplane at all, because many of its normal attributes don't adapt easily to this application.

SFI typically uses the packet concept, which works well in communications applications: sending information between systems, not just across a backplane. This proved very effective for sending data over a single line, but of lesser value in the backplane realm, where the cost efficiencies of one line versus two aren't that significant.

For a single-CPU computer, where everything within the system is coordinated and directed through the processor, the advantages of multiple channels are minimal. Once a single-processor computer has separate microcontrollers embedded in each function card to control that card's own operations, the multiple independent channels of the SFI will become extremely attractive. Right now, though, the single-channel bandwidth of a cost-effective switch-fabric solution doesn't compete with a multipoint solution.

In the multiprocessor server environment, the transmission and setup latency encountered in today's SFI implementations is quite significant. The transmission latency in a multipoint environment, for example, includes only the propagation delay in the two endpoint transceivers and in the length of the trace between them. In the case of an SFI, several other factors come into play: serialization at the starting point, the latency of the trace to the switch, de-serialization at the switch, the delay through the switch, re-serialization at the switch, the trace from the switch, and de-serialization at the final endpoint. Also, at one or two points, elastic buffers are needed to adjust between clock boundaries. All of this adds up to a lot of potential transmission delay.
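To make the difference in the two latency budgets concrete, the stages listed above can be summed directly. This is a minimal sketch; every delay value below is an illustrative assumption in nanoseconds, not a measured figure from the article.

```python
# Illustrative latency budget: multipoint backplane vs. a switch-fabric path.
# All stage delays are assumed values for demonstration only.

MULTIPOINT_NS = {
    "tx_transceiver": 2.0,     # propagation through the sending transceiver
    "endpoint_trace": 1.5,     # trace between the two endpoints
    "rx_transceiver": 2.0,     # propagation through the receiving transceiver
}

SFI_NS = {
    "serialize": 10.0,             # serialization at the starting point
    "trace_to_switch": 1.0,
    "deserialize_at_switch": 10.0,
    "switch_delay": 20.0,          # delay through the cross-point switch
    "reserialize_at_switch": 10.0,
    "trace_from_switch": 1.0,
    "deserialize_endpoint": 10.0,
    "elastic_buffers": 8.0,        # clock-boundary adjustment, one or two points
}

def total_latency(stages: dict) -> float:
    """Sum the per-stage delays along one path."""
    return sum(stages.values())
```

Even with generous assumptions for the multipoint path, the serialize/de-serialize stages alone dominate the SFI total, which is the article's point about transmission delay.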

Data pipelining, another issue for the computing environment, may become problematic due to setup latency. Typically, only one packet at a time may be transferred, unless the cross-point switch contains significant memory. While a packet is in transit, nothing can be done to set up the next transfer, creating potential gaps in the data stream. Setup latency may be compounded by the possibility of sending a packet to an output port that's already busy with another request. In that case, unless there is a store-and-forward buffer, the packet is rejected and recovery techniques must be implemented to resend it. A full-mesh backplane solves many of these latency problems, though its cost may become an issue.
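The gap-and-retry behavior described above can be sketched as a toy simulation. The structure (serialized setup, one packet in flight, reject-and-resend on a busy port) follows the article; the cycle costs and port model are illustrative assumptions.

```python
# Toy model of a non-pipelined switch fabric with no store-and-forward
# buffering: setup cannot overlap a transfer, and a packet aimed at a
# busy output port is rejected and resent. Cycle costs are assumed values.

from collections import deque

def send_packets(packets, port_busy_until, setup_cost=2, transfer_cost=5):
    """Serially transfer packets (a list of destination port ids).

    port_busy_until maps a port id to the cycle at which it frees up.
    Returns (total_cycles, retry_count).
    """
    clock = 0
    retries = 0
    queue = deque(packets)
    while queue:
        dest = queue[0]
        clock += setup_cost                   # setup gap before each attempt
        if clock < port_busy_until.get(dest, 0):
            retries += 1                      # rejected: output port still busy
            clock = port_busy_until[dest]     # back off, then resend
            continue
        queue.popleft()
        clock += transfer_cost                # only one packet in transit
    return clock, retries
```

Running this with one contended port shows how a single busy output stretches the whole stream, since nothing behind the rejected packet can proceed.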

Latency is far better tolerated in communications applications, where the transmission of data packets operates quite satisfactorily despite it. In computers, by contrast, latency and nonpipelined data cause the processor to waste performance on wait states.

In the face of all these issues, how can SFI technology be effectively applied to the computing environment? Here is a possibility worth considering: Break out the data, using a separate set of lines to send the address and control commands.

A primary advantage of this approach is that it reduces both transmission and setup latency to almost nothing. Also, multiple requests could be sent to the switch for pre-arbitration while the previous data stream is being sent. Multiple outstanding requests would eliminate a great deal of the setup latency and increase the probability of addressing an open output port. The design could also use the address/control lines for messages, doorbelling, and short broadcasts, freeing the datapath for data. Finally, specific ports could be prioritized (e.g., microprocessors over cache) to preempt requests from lower-priority ports.
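The separate address/control channel described above can be sketched as an out-of-band request queue that the switch arbitrates ahead of time. This is a hypothetical illustration: the class name, priority scheme, and port model are assumptions, not part of any SFI specification.

```python
# Hedged sketch of out-of-band pre-arbitration: requests arrive on a
# separate address/control channel and are arbitrated while the current
# data transfer is still in flight. Lower priority value wins; ties go
# to arrival order. All names and numbers here are illustrative.

import heapq

class ControlChannel:
    """Queue of (priority, seq, src_port, dst_port) switch requests."""

    def __init__(self):
        self._requests = []
        self._seq = 0  # tie-breaker that preserves arrival order

    def post(self, src_port: int, dst_port: int, priority: int) -> None:
        # A microprocessor port might post with a lower (better) priority
        # value than a cache port, preempting the cache's pending requests.
        heapq.heappush(self._requests,
                       (priority, self._seq, src_port, dst_port))
        self._seq += 1

    def arbitrate_next(self, busy_ports):
        """Grant the best pending request whose output port is free."""
        deferred = []
        winner = None
        while self._requests:
            req = heapq.heappop(self._requests)
            if req[3] not in busy_ports:
                winner = req
                break
            deferred.append(req)
        for req in deferred:              # requests to busy ports stay queued
            heapq.heappush(self._requests, req)
        return winner                     # None if nothing can be granted yet
```

Because arbitration happens on the control lines while data is still moving, the winning request's path is already set up when the datapath frees, which is how this scheme attacks the setup-latency gaps described earlier.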

Other functions not exclusively requiring this separation could also be implemented to improve the overall operating environment for computing system applications.
