Electronic Design

Switch Fabrics Optimize Communications Backplane

The appropriate topology, chassis design, and cooling management yield scalable high-performance systems.

If you're like a lot of design engineers, you want to design a high-performance communications system that's scalable, supports "five-nines" high availability (HA), and is low in cost—and you need to start now. You've seen the switched-fabric architectures available today but don't know how to achieve the optimal design of your backplane-based system. A few of the issues you face include fabric-slot placement, chassis configuration, and components involved.

Not only do switched fabrics allow far superior speed and bandwidth capabilities, but they inherently support HA designs and system scalability. Moreover, switched fabrics eliminate the need for shared bus architectures. Only one device can communicate at a time using a bus. All other devices must wait until an arbitration scheme determines that it's their turn to use the bus.

To increase the total throughput of a bus, it must be sped up or else widened. Both options usually limit the number of devices that can be effectively connected to it. With a switch fabric, each device is hooked up to every other device in the system through a network of connections. Thus, several devices can communicate simultaneously.

Also, redundancy can be built into the switched-fabric interconnections to support five-nines HA designs. The point-to-point nature of switched fabrics can enhance reliability by isolating faults to single endpoints. With buses, an errant endpoint can bring down the entire bus. Plus, point-to-point connections are inherently friendly to device insertion and removal.

Ring, Star, Dual-Star, and Full-Mesh topologies: There are several topologies for switched fabrics. PICMG 2.16 and StarFabric can use Star or Dual-Star topologies (Fig. 1a and 1b). PLX's GigaBridge uses the Ring (Fig. 1c). Future implementations of these technologies will likely move to Full Mesh for high-end applications with complete point-to-point interconnections (Fig. 1d).

A Star topology is centralized and has only one fabric slot supported on the backplane. A Dual Star has two fabric slots supported on the backplane, providing redundancy. The Ring topology uses controllers that act as a node capable of managing multiple bus segments. They're connected via a dual-counter-rotating ring to other controllers. This arrangement forms a vast network of bus segments.

Higher bandwidth and better quality-of-service (QoS) applications call for Mesh fabrics, where each node slot is interconnected to the others with point-to-point links. Also, in Mesh fabrics, each node is an endpoint that manages its own traffic without a central resource. The data rates and protocols don't depend on data transfers in other slots. So the fabric is highly scalable and avoids the latency and determinism problems of a shared resource.
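The wiring cost behind these topology choices is easy to quantify: Star and Dual-Star link counts grow linearly with the number of node slots, while a Full Mesh grows quadratically. A minimal sketch (the slot counts used below are illustrative, not from any particular backplane):

```python
def star_links(nodes, fabric_slots=1):
    """Point-to-point links when every node connects to each central fabric slot."""
    return nodes * fabric_slots

def full_mesh_links(nodes):
    """Point-to-point links when every node connects directly to every other node."""
    return nodes * (nodes - 1) // 2

# Link counts grow linearly for Star/Dual-Star, quadratically for Full Mesh
for n in (8, 16, 21):
    print(n, star_links(n), star_links(n, 2), full_mesh_links(n))
```

For a 21-slot backplane, a Dual Star needs 42 links while a Full Mesh needs 210, which is why Full Mesh routing density is reserved for high-end applications.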

If you're designing a system using PICMG 2.16, StarFabric, or GigaBridge, one of the first things to determine is whether to implement the CompactPCI (cPCI) bus. (These technologies are 100% compatible with cPCI and fit into the existing IEEE 1101.10/.11 mechanical framework.) When using cPCI, the cPCI bus acts as the control plane on P1 and P2, with P3 and P5 acting as the data plane. P4 is reserved for an optional H.110 bus implementation for computer telephony. The cPCI bus width can be 32 or 64 bits. Alternatively, one can forgo the cPCI bus and use the area for custom signals.

Remember that the physical positions of the fabric, system, and node slots on a switched-fabric backplane are important. You have to decide which side (left, right, or middle) to place the fabric slot(s). The backplane manufacturer may also suggest this position, taking into account that for each position, the routing complexity is different. Placing the two fabric slots on the right (same) side of the backplane makes the routing much easier. Thus, fewer layers are needed, which reduces the design and material costs. But if the system has only one fan tray, a fan going out above the two fabric slots presents a single point of failure. So when employing this method, select redundant fan trays in the chassis.

Fabric slots, node slots, and system slots: Basically, a fabric card provides switching and/or routing functions to create a fabric between the node boards. For example, with a PICMG 2.16 cPSB backplane, the fabric slots (one or two) can support a standard fabric board (1 through 19 link ports), or an extended fabric board (20 through 24 link ports).

The node slots are the points where one can have PCI bridges, Ethernet cards, DSP cards, etc. In a standard cPSB (PICMG 2.16) application in a 19-in. rack, the maximum number of node slots is 20 for a single-fabric topology, and 19 for a dual-fabric topology. In an extended cPSB for ETSI racks of up to 24 in., the number of node slots can be 20 to 24. The links between the fabric slot and node slots can be 10, 100, or 1000 Mbits/s. For StarFabric, each link is made up of four 622-Mbit/s low-voltage differential-signaling (LVDS) transmit and receive pairs. This translates to a 2.5-Gbit/s full-duplex bandwidth.
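The per-link arithmetic quoted above is easy to verify: four 622-Mbit/s LVDS pairs in each direction give roughly 2.5 Gbits/s of full-duplex bandwidth.

```python
PAIR_RATE_MBITS = 622   # one StarFabric LVDS pair, per the figures above
PAIRS_EACH_WAY = 4      # four transmit pairs and four receive pairs per link

each_way_mbits = PAIR_RATE_MBITS * PAIRS_EACH_WAY  # 2488 Mbits/s
print(f"~{each_way_mbits / 1000:.1f} Gbits/s in each direction, full duplex")
```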

GigaBridge uses a PCI switch-fabric controller on a PCI bus segment and interoperates with other GigaBridge controllers as nodes on the switch fabric. Each node is linked on the dual counter-rotating rings via dual 16-bit-wide LVDS-based links operating at 400 MHz.

Development backplanes provide an excellent medium to continually modify and work out your design. Figure 2 shows the layout of a development backplane for PICMG 2.16, with various options for implementations of the cPSB links, cPCI bus, H.110 bus, etc.

Gigabit speeds are more difficult to come by, but by proper design, you can avoid high backplane-layer counts. An intelligent design ensures excellent signal integrity, shielding of the high-speed lines, and creative routing based on simulation study. When dealing with the high speeds and performance demands in switched fabrics, controlling impedance and minimizing crosstalk are important backplane design issues.

Some switched-fabric backplanes can be kept at 12 layers or fewer by using a controlled-impedance stripline design. In stripline design, the outside layers are ground for electromagnetic-interference (EMI) protection and suppression. The signal layers are alternated with power or ground layers, which also minimizes crosstalk. In some designs, the cPCI busing on P1 and P2 and the H.110 bus on P4 can be routed in eight layers. Some fabric links are routed on the same eight layers, with the remainder routed on the other four layers.

The differential pairs are arranged with optimal spacing between the differential pairs and other types of signals. Optimal spacing is determined from simulation studies and the designer's experience in various routing applications. Crosstalk between different differential pairs and other types of signals (PCI, H.110) is practically zero. Plus, other signals don't hamper data transmitted via these differential pairs.

For applications using the existing cPCI format, switched-fabric interconnects provide excellent performance. However, the current connector limits the maximum speed. The standard cPCI 2-mm HM connector can't handle speeds above 1.5 Gbits/s very well.

High-speed connectors stand in contention for "next-gen" specifications—PICMG 3.0 for the PCI Industrial Computer Manufacturers Group (PICMG), and VITA 34 for the VMEbus International Trade Association (VITA). ERNI and Tyco are producing the ZD connector, FCI and ITT Cannon are producing the Metral connector, and Teradyne and Molex are producing the VHDM-HSD connector. All are based on a 100-Ω, differential-pair architecture and claim to handle up to 5-Gbit/s speeds.

Design and routing considerations: Proper signal organization on the backplane makes it much easier to achieve high performance. When dealing with custom bus routing (common on telecom systems), the physical orientation of signal assignments is important. Connector-pin to connector-pin wiring helps prevent the signals from crossing—something to avoid at all costs.

Many designers also choose to avoid vias. A feed-through from one layer to another can cause several problems. For example, a trace may be 0.007 in. wide, but the via pad it meets could be 0.120 in. or so in diameter. In turn, the impedance, which was balanced at 65 Ω, suddenly drops to a much lower level, causing reflections and signal ringing. If vias can't be avoided, save the direct routes for clocks and high-speed data signals, reserving the vias for steady-state signals.
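The severity of such an impedance step can be estimated with the standard voltage reflection coefficient, Γ = (Z2 − Z1)/(Z2 + Z1). A minimal sketch, using the article's 65-Ω line and an assumed (illustrative) 30-Ω dip at a fat via pad:

```python
def reflection_coefficient(z_step, z_line=65.0):
    """Voltage reflection coefficient at an impedance discontinuity."""
    return (z_step - z_line) / (z_step + z_line)

# Illustrative: a 65-ohm line dipping to ~30 ohms at a via reflects
# about 37% of the incident voltage (negative sign = inverted polarity)
print(f"{reflection_coefficient(30.0):.2f}")
```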

LVDS, a common transmission method for switched fabrics, offers the benefits of higher speeds at low power, noise, and cost. Remember to keep the LVDS differential impedances the same. In general, the traces should be kept close together, and the same length, to minimize signal skew. If one signal arrives before the other, the phase difference between the voltages causes noise. Additionally, 90° right-angle turns cause impedance discontinuities.
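The skew penalty for mismatched pair lengths can be estimated from the board's propagation delay. A rough sketch, assuming about 170 ps/in. for FR-4 stripline (a typical ballpark figure, not from the article):

```python
PS_PER_INCH = 170.0  # assumed FR-4 stripline propagation delay, ~170 ps/in.

def skew_ps(length_mismatch_in):
    """Approximate intra-pair skew caused by a trace-length mismatch."""
    return length_mismatch_in * PS_PER_INCH

# A half-inch mismatch between the two traces of an LVDS pair
print(skew_ps(0.5))
```

At 622 Mbits/s per pair, even tens of picoseconds of skew eat into the bit period, which is why matched lengths matter.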

Chassis design elements: With some special considerations, switched-fabric systems can use implementations from existing chassis developments. Although switched fabrics can be implemented in many areas, the initial key market is communications. Therefore, one must keep in mind the strict Network Equipment Building Systems (NEBS) compliance issues and considerations for HA and reliability.

Because most switched fabrics have implementation options in standard cPCI-based bus structures (some new specifications will be VME-based), existing packaging concepts can be used. This is a significant advantage, as similar packaging solutions in such areas as electromagnetic compatibility (EMC), cooling, and shock and vibration can be incorporated.

Using the IEEE 1101.10/.11 specifications makes EMI containment on the front and rear panel/cards easier to achieve. Also, shielding with EMC gaskets and BeCu "fingers," or contact strips, will help the chassis maintain electrical continuity between mating metallic surfaces, like panels, covers, and so on.

Further advances in shielding technology involve employing specially tooled EMI contact springs that are punched directly into the metal. This innovative design eliminates the need for costly gasketing, while improving system reliability and performance.

Conducted emissions can be addressed by incorporating high-performance EMI line filters, enabling the chassis to meet FCC/CISPR requirements. For NEBS level 3 compliance, the chassis has to pass the test without front paneling, so the individual cards must be properly shielded.

The number of slots is an important consideration. For a scalable 17-slot system using 700 W or less, a 9U chassis can provide the ideal space-saving packaging within a 19-in. rack—and use 6U cards. This allows the maximum number of slots in a 19-in. rack, along with dual-redundant power supplies and front-to-rear cooling. The benefit of front-to-rear cooling is that it enables stacking chassis in a rack without affecting units above or below.

Figure 3 shows a prototype 19-in. rackmount EMC chassis with accommodations for a switched-fabric backplane of up to 21 slots, with rear I/O capability. It uses a push-pull airflow scheme: three individually removable plug-in fan trays with 90-cfm tube-axial intake fans below the cards, and dual radial blowers above the cards for exhaust. The intake air is filtered using easily removable, Bellcore-compliant foam air filters.

Proper cooling is becoming an increasingly important system-design issue. The standards associations are moving toward CPU-centric systems with larger boards and more components generating heat. (Both VITA 34 and PICMG 3.0 are looking at 8U high cards, and PICMG 3.0 may go to 280-mm depths.)

The cooling requirements are best resolved through advanced airflow techniques—a slot air baffle, air plenums, etc. The slot air baffle lets airflow be directed at the individual slot level by tilting the "vanes" under each slot. This design has been successfully deployed in various applications.

With a push-pull airflow technique, a chassis can use tube axial intake fans and dual-redundant compact radial blowers for exhaust. Using compact radial blowers, or backwards-curved impellers, has proven very effective in dissipating heat buildup under high static pressure. To address the demanding cooling requirements, alternate versions of HA chassis will include positive pressure cooling (fans blowing on the cards), evacuative cooling (exhaust fans), or a combination of both. Again, front-to-rear cooling is necessary when chassis are stacked in a rack.

Fan monitoring alarms also are encouraged for mission-critical applications. Proper cooling ensures longer component life. As MTBF and MTTR are important issues in NEBS compliance, cooling is definitely critical in most switched-fabric applications.

The airflow needed is a function of the heat to be dissipated and the maximum permissible temperature rise through the system enclosure. It also depends on the coolant medium. If you're using a forced-air-cooled chassis, the following calculation applies:

Q = m ×CP × ΔT

where Q = heat to be dissipated in kilowatts, m = mass flow rate of the coolant medium, CP = specific heat of the medium, and ΔT = coolant temperature rise through the system.

When air is used as the coolant, the equation reduces to:

cfm = (1760 × kW)/ΔT(°C)

where cfm = airflow in cubic feet per minute.

For example, say the total heat to be dissipated in the enclosure is 300 W (250 W through the card cage, plus 50 W from the inefficiency of the power supplies). The total temperature rise through the system is restricted to 10°C. Using the above equation, cfm = (1760 × 0.300)/10 ≈ 53. Therefore, the cooling system must provide at least 53 cfm at zero static pressure.
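The rule of thumb above is simple enough to wrap in a helper for sizing trade-off studies. A minimal sketch (the 500-W/15°C figures in the usage line are illustrative, not from the article):

```python
def airflow_cfm(heat_watts, delta_t_c):
    """Minimum airflow in cfm, at zero static pressure, for forced-air cooling.

    Implements the rule of thumb cfm = (1760 x kW) / deltaT(degC).
    """
    return 1760.0 * (heat_watts / 1000.0) / delta_t_c

# e.g. a 500-W enclosure limited to a 15 degC rise
print(f"{airflow_cfm(500, 15):.0f} cfm minimum at zero static pressure")
```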

Most enclosures, though, will have certain static pressure buildup due to system resistance to airflow. This is influenced by factors such as restrictions to the air-intake opening, bends in the airflow path, and other obstructions in the airflow path. To determine the correct type and number of fans to use, an estimate of the static pressure in the enclosure must be arrived at. Based on this, the proper fans or blowers can be identified by studying their performance curves.

Incorporating critical features like hot swap and redundant operation will play a major role in meeting HA requirements. Therefore, the chassis must accommodate pluggable fan trays, power supplies, system management, and monitoring. The continued use of 3.3-, 5-, and 12-V circuits allows the choice of power-supply solutions from the multitude of proven vendors in the industry. However, some central-office applications will use 48 V dc directly to the backplane, negating the need for power supplies.

An important consideration in an HA system is hot-pluggability. The PICMG 2.11 Power Supply Interface specification defines a 47-pin Positronic connector for pluggable supplies. With gold-plated BeCu, C97 copper alloy, or similar contact material, these power contacts can absorb the arcing that occurs when supplies are plugged and unplugged under load. Power-supply sizing should be based on a fully loaded, worst-case configuration with 25% to 30% headroom, so the system isn't taxed when running at full capacity.
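That sizing guideline reduces to one line of arithmetic. A minimal sketch (the 700-W worst-case load is an illustrative figure, not from the article):

```python
def supply_rating_w(worst_case_load_w, headroom=0.30):
    """Supply rating from the fully loaded, worst-case draw plus 25-30% headroom."""
    return worst_case_load_w * (1 + headroom)

# A 700-W worst-case system sized with 30% headroom needs roughly a 910-W supply
print(round(supply_rating_w(700)))
```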

Power supplies can accommodate load sharing by using internal OR-ing diodes. In addition, they can balance the load between themselves using either third-wire or droop-current sharing methods. Current droop regulates within 10% to 15%, and third wire regulates to 5%. The power supplies and fans have failure indicators for easy diagnosis and replacement, which is another important tool for monitoring in HA systems.

System monitoring is increasingly important in switched-fabric systems. Critical voltages like +3.3 V, +5 V, and ±12 V, as well as fan speed/fan fail, temperature, and so on, need to be monitored. More-advanced remote-management systems include implementations of I2C, the Intelligent Platform Management Interface (IPMI), and others.
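The core of such voltage monitoring is a threshold check against each rail's nominal value. A hypothetical sketch (the ±5% tolerance and the sample readings are assumptions for illustration; real limits come from the board specs):

```python
# Nominal values for the rails named above
NOMINALS = {"+3.3V": 3.3, "+5V": 5.0, "+12V": 12.0, "-12V": -12.0}

def out_of_tolerance(readings, tol=0.05):
    """Return the rails whose sampled voltage deviates more than tol from nominal."""
    return [rail for rail, volts in readings.items()
            if abs(volts - NOMINALS[rail]) > abs(NOMINALS[rail]) * tol]

# Hypothetical sensor sample: the +5-V rail has sagged to 4.6 V
print(out_of_tolerance({"+3.3V": 3.28, "+5V": 4.6, "+12V": 12.1, "-12V": -12.0}))
```

In a real chassis, the readings would come from I2C/IPMI sensor queries, and a flagged rail would raise an alarm for diagnosis and replacement.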

As voice, video, and data systems continue to converge, some exciting developments will come forward. For current applications, StarFabric, PICMG 2.16, and GigaBridge look to be leading the way. Down the road, VITA and PICMG will continue to offer interesting alternatives that are open to the industry.
