Advance Transceiver Designs To Meet 2.5-Gbit/s Data Rates

Dec. 14, 1998
Overcoming The Obstacles Posed By High Frequencies Requires Astute Design Decisions At Both The System And Silicon Level.

Over the past decade, the escalating need for faster communications has driven network infrastructures well beyond the limitations of traditional copper media and has spurred a migration to the inherently higher bandwidths of fiber optics. Optical links running at speeds from 622 Mbits/s to 2.5 Gbits/s have already emerged as the preferred media for campus backbone LANs, storage area networks (SANs), metropolitan area networks (MANs), and wide area networks (WANs). From the designer's standpoint, however, making the leap from 10/100BaseT speeds to 2.5-Gbit/s data rates presents a host of new design challenges.

Many of the tried-and-true digital design assumptions that provided ample margin and headroom at 10/100BaseT speeds must now be reconsidered in light of the constraints imposed by multi-gigabit data rates. Designing robust transceiver systems that can reliably handle such high frequencies at the board-edge connector pushes the on-board circuitry into a whole new realm, where even relatively short traces behave as analog transmission lines rather than as simple carriers of crisp digital waveforms.

In addition, signals at these higher frequencies are much more susceptible to transient noise on the board and to jitter in the data line. Ground and power-bus isolation becomes a paramount consideration, along with careful power-supply selection. And the on-board presence of potentially noisy parallel data buses creates further layout challenges for the board designer.

Clearing The Hurdles

Successfully overcoming all of these new obstacles requires astute design decisions at both the system level and the silicon level. Not only do system engineers need to use optimal board design and layout rules to minimize noise, jitter, and interference, but they also need to leverage new semiconductor-integration options to maximize available margins and headroom.

A critical first step in effective system design is selecting transceiver components that match your architectural requirements. Options in the newest generation of transceiver silicon include internal versus external clock recovery, built-in parity checking, and diagnostic loopback capabilities. Package size, power requirements, and component cost can vary significantly with the transceiver's feature set and performance, so the choice deserves careful evaluation.

For example, let's look at a block diagram for a SONET STS-48/STM-16 transceiver application designed to provide a fully integrated 2.488-Gbit/s PMD layer (Fig. 1). The Sumitomo fiber-optic receiver and fiber-optic transmitter components are paired respectively with integrated demultiplexer and multiplexer devices, which provide the deserialization and serialization functions to convert between high-speed bit-serial and byte-serial data. In turn, the 311-MHz byte-serial data streams to and from these components are interfaced (via an integrated multiplexer/demultiplexer) to a bank of four PMC-Sierra PM5355 devices, each handling a 77-MHz data stream.
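
As a quick sanity check on those numbers, the byte-serial clocks follow directly from the 2.488-Gbit/s line rate. A minimal sketch in Python; the rates and bus widths come from the reference design above:

```python
# Sanity check of the data rates in the Figure 1 reference design.
# The serial line rate and bus widths come from the article; the
# arithmetic below shows how the byte-serial clocks fall out.

SERIAL_RATE_GBPS = 2.488                        # SONET STS-48/STM-16 line rate

byte_rate_mhz = SERIAL_RATE_GBPS * 1000 / 8     # 8-bit deserialization
print(f"byte-serial clock: {byte_rate_mhz:.0f} MHz")    # ~311 MHz

per_device_mhz = byte_rate_mhz / 4              # split across four PM5355s
print(f"per-device clock:  {per_device_mhz:.2f} MHz")   # ~77.75 MHz, the "77-MHz" streams
```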

As will be discussed in more detail below, the 2.5-GHz data and clock lines between the board-edge fiber-optic components and the multiplexer/demultiplexer chips represent critical pc-board layout challenges. In addition, the multiple 311-MHz and 77-MHz data streams are significant potential noise sources.

Integration Benefits

Silicon-level integration of transceiver components offers the immediate benefits of shared packaging costs and common reference and threshold generators, and it opens the door to simpler designs for complex multichannel boards. At the silicon level, having the transmitter multiplexer, receiver demultiplexer, and clock recovery in the same chip set allows on-chip implementation of closely coupled loop-timing structures.

For instance, by wrapping the receive timing back around on the transmitter, a channel can essentially be made to look like a complete low-cost terminal to the system on the other side of the transmission link. And by migrating much of the channel-switching functionality down onto a four- to eight-channel transceiver board, designers can better leverage new high-speed system-level switching fabrics, such as serial backplane architectures, that yield improved overall throughput and lower cost per channel.

Not only does sharing the receive clock with the transmitter greatly simplify the clock and timing distribution within the system, it also allows for simple on-chip implementation of repeater timing. The matching of chip-level timing with the network's overall clock synchronization can be especially important as higher-level optical network topologies migrate toward wavelength-switched capabilities (such as wavelength division multiplexing).

From an architectural standpoint, it's beneficial to conduct chip-level switching of separate wavelength data transmissions while staying completely within the overall network's time domain. Using the same bit-synchronous timing at the chip level also enables cost-effective implementation of integrated performance monitoring on-the-fly at the repeater level.

Of course, packing all of this additional functionality onto a multichannel, board-level transceiver module at 2.5-Gbit/s speeds pushes noise and jitter management to the forefront of the design challenges. Because the requirements of Bellcore, ANSI, and ITU specifications have to be met "at the connector," designers must build in appropriate margins at every point where noise and/or jitter may contribute to the overall problem. In essence, the designer has to allow for the nonideal behavior of the electro-optics, equalizers, and everything else involved in getting the signal between the off-board media and the serializer/deserializer circuitry in the transceiver.

As transceiver board designs move up above OC-12 (622 Mbits/s) and on to OC-48 (2.5 Gbits/s), one key problem in controlling noise and jitter revolves around transmission-line challenges that were negligible at lower frequencies. In an OC-48 design, both the tolerance for input jitter and the acceptable jitter-transfer ratio drop off significantly as the modulation frequency increases (Fig. 2).
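
To make the mask idea concrete, here is a hedged sketch of checking a measured tolerance point against a piecewise jitter-tolerance mask of the kind plotted in Figure 2. The corner points below are illustrative placeholders only, not the actual Bellcore/ITU OC-48 mask values:

```python
from math import log10

# Piecewise-linear (on log-log axes) jitter-tolerance mask.
# Corner points are ASSUMED placeholders, not real OC-48 mask values.
MASK = [(1e2, 15.0), (1e4, 1.5), (1e6, 0.15), (1e7, 0.15)]  # (Hz, UI p-p)

def mask_ui(f_hz):
    """Log-log interpolation of the tolerance mask at frequency f_hz."""
    for (f1, u1), (f2, u2) in zip(MASK, MASK[1:]):
        if f1 <= f_hz <= f2:
            frac = (log10(f_hz) - log10(f1)) / (log10(f2) - log10(f1))
            return 10 ** (log10(u1) + frac * (log10(u2) - log10(u1)))
    raise ValueError("frequency outside mask range")

# A receiver passes at a given modulation frequency if it tolerates
# at least the mask amplitude there.
print(f"required tolerance at 100 kHz: {mask_ui(1e5):.2f} UI p-p")
```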

At multi-gigabit speeds, maintaining acceptable circuit-routing and board-layout practices becomes an even more stringent constraint for controlling jitter on the input circuits. In Figure 1, for example, the 2.5-GHz lines between the fiber optics and the multiplexer/demultiplexer chips will by default take on all the characteristics of a transmission line for any connection longer than 2.5 cm.
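
One common rule of thumb (an assumption here, not a figure from the article) treats a trace as electrically long once it exceeds roughly a tenth of a wavelength at the frequency of interest. With typical FR-4 parameters, that lands in the same low-centimeter range as the 2.5-cm figure above:

```python
from math import sqrt

# When does a trace stop being "just a wire"? Assumed criterion:
# electrically long above ~1/10 of a wavelength in the board.
C_CM_PER_S = 3.0e10
ER_EFF = 4.3                 # assumed effective dielectric constant for FR-4
F_HZ = 1.25e9                # fundamental of a 2.5-Gbit/s NRZ stream

v = C_CM_PER_S / sqrt(ER_EFF)          # propagation velocity in the board
wavelength_cm = v / F_HZ
print(f"wavelength in board: {wavelength_cm:.1f} cm")            # ~11.6 cm
print(f"electrically long above: {wavelength_cm / 10:.1f} cm")   # ~1.2 cm
```

The exact threshold depends on edge rates and the criterion chosen, but the order of magnitude is what drives the layout rules that follow.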

In addition to minimizing the length of these lines, particular attention must also be paid to terminations, stubs, corners on circuit lines, and to balancing differential lines so that they have equal electrical lengths. Termination resistors should sit as close to the endpoint of the line as possible. In the Figure 1 reference design, all of the 2.5-GHz circuits are equal-length 50-Ω transmission lines, terminated directly at 50-Ω resistors embedded in the S3045 device. Other key layout rules are to avoid any 90° turns in the high-speed lines and to always use adequate decoupling.

Another key factor to keep in mind when modeling and matching the I/O circuits between the fiber-optic and serialization/deserialization components is that, at 2.5-GHz levels, every portion of the circuit can have a significant impact on the final jitter tolerance of the overall link. For example, the circuit chain between the 3041 transmitter die and the laser driver includes a series of intermediate links: the transmitter's die, bond wire, pad, and package; the pc-board transmission line; and then the package and bond wire on the laser-driver side (Fig. 3). To effectively characterize this overall transmitter-to-laser-driver circuit at 2.5-GHz speeds, every one of these intermediate links would have to be included in the SPICE model.
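
To see why every link counts, consider a hedged lumped-element estimate of just one of them: a bond wire feeding a pad-plus-package capacitance. The element values below are typical assumed figures, not measurements from the article:

```python
from math import pi, sqrt

# Back-of-the-envelope look at one link in the chain described above.
L_BOND_H = 1.0e-9      # ~1 nH for a short bond wire (assumed)
C_PAD_F  = 0.5e-12     # pad plus package capacitance (assumed)

# The L-C pair resonates uncomfortably close to the harmonics of a
# 2.5-GHz signal, so it cannot be dropped from the model.
f_res = 1 / (2 * pi * sqrt(L_BOND_H * C_PAD_F))
print(f"bond-wire/pad resonance: {f_res / 1e9:.1f} GHz")   # ~7 GHz
```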

The designer also must be extremely careful about the integrity of ground planes beneath the signals to avoid undesirable cross-coupling. As the frequency climbs beyond 1 GHz, the selection of board materials also becomes critical. Less-expensive glass-epoxy (FR-4) boards may actually start to become dissipative at these frequencies, forcing a transition to alternatives such as Teflon-content boards if longer circuit traces are used in the layout.

Generally, designers can live with lower-cost FR-4 if they keep high-speed runs very short, about two inches or less. Here again, the trade-offs in choice of transceiver features and packaging size can play an important role. Keep the transceivers small so that they can be moved closer to the board-edge fiber-optic components, thereby minimizing transmission-line lengths. Transceiver size becomes especially critical in multichannel designs, where both card-edge spacing and overall board real estate are at a premium.

In addition to substrate dissipation issues, longer on-board transmission lines run into resistive losses in the copper itself, aggravated by the skin effect, which confines signal propagation to the outer portion of the conductor. At frequencies around 1 GHz, the combined losses from all of these sources show up empirically as intersymbol interference, a blurring of the ideal bit edge.

For instance, tests conducted by AMCC have demonstrated that launching a 1-GHz signal with a clean, 100%-open eye diagram across a one-foot, 50-Ω transmission line on an FR-4 substrate yields only a 90%-open eye diagram at the receiving end. Attenuation losses across any medium typically roll off at a rate proportional to the square root of frequency. But when skin losses combine with dielectric dissipative losses, total signal loss begins to grow directly with frequency above about 1.5 GHz.
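
A minimal sketch of that two-regime loss behavior, assuming a placeholder 1-dB loss at 1 GHz (an illustrative assumption, not a figure derived from the AMCC measurement):

```python
from math import sqrt

# Skin-effect-dominated loss scales as sqrt(f); dielectric loss scales
# directly with f and takes over above the crossover from the text.
LOSS_AT_1GHZ_DB = 1.0    # ASSUMED total loss at 1 GHz, 1-ft FR-4 trace
CROSSOVER_GHZ = 1.5      # where direct-with-f loss takes over (from text)

def loss_db(f_ghz):
    if f_ghz <= CROSSOVER_GHZ:
        return LOSS_AT_1GHZ_DB * sqrt(f_ghz)        # sqrt(f) regime
    base = LOSS_AT_1GHZ_DB * sqrt(CROSSOVER_GHZ)
    return base * (f_ghz / CROSSOVER_GHZ)           # linear regime

for f in (0.622, 1.0, 1.5, 2.5):
    print(f"{f:5.3f} GHz -> {loss_db(f):.2f} dB")
```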

Radiated losses also can easily occur at these high frequencies unless return paths and ground planes are carefully maintained in a very clean board layout. If the high-speed trace begins to act as an antenna, it obviously results in two major problems—loss of adequate signal at the destination optic module, and injection of unwanted noise into the rest of the system (such as EMI and/or cross-talk between channels that increases jitter).

Good board-design practices include making sure that high-speed traces don't have to jump between board layers. In some cases, for a trace that must run longer than an inch or two, it may even be useful to drop it down a layer as an embedded microstrip. Essentially, a microstrip provides a separate strip conductor that is dielectrically isolated from the board's primary ground plane (Fig. 4). If the thickness and width of the line, and its height above the ground plane, are carefully controlled, the microstrip will exhibit a consistent characteristic impedance.
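
For a surface microstrip, one widely used closed-form estimate is the IPC-2141 approximation; an embedded microstrip shifts the numbers somewhat, but the sketch below, with assumed dimensions, shows how a roughly 50-Ω line falls out of the geometry:

```python
from math import log, sqrt

def microstrip_z0(w_mil, h_mil, t_mil, er):
    """IPC-2141 surface-microstrip approximation: trace width w, height
    above the plane h, trace thickness t (same units), dielectric er.
    Reasonable for 0.1 < w/h < 2."""
    return 87.0 / sqrt(er + 1.41) * log(5.98 * h_mil / (0.8 * w_mil + t_mil))

# ASSUMED geometry: 10-mil trace, 6 mil above the plane, 1.4-mil (1-oz) copper
print(f"Z0 = {microstrip_z0(10, 6, 1.4, 4.3):.1f} ohms")   # ~49 ohms
```

With these assumed dimensions, the formula lands near the 50 Ω used throughout the Figure 1 design.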

New-generation integrated transceiver chip sets also help manage noise and jitter in two primary ways. First, by shrinking the overall package, they let board designers pack more functionality into less board real estate while simultaneously reducing trace lengths. Second, integrating both the transmit and receive functions into a single chip set pulls into silicon many of the traces that would otherwise have to be implemented on the pc board.

For example, all of the diagnostic-loopback and line-loopback circuits are included on-chip, sparing the board designer a series of inch-long board traces that would have a high potential for radiated losses. Pulling such circuitry into the chip and eliminating the need to drive several high-speed board traces also yields significant power savings, greatly simplifying the overall challenge of power and ground management.

Bus Isolation Challenges

Maintaining a low-noise environment relies heavily on the choice of the multilayer board structure and the effective placement of ground planes, along with the type and location of the power supply. While modern switching power supplies are relatively cheap and power-efficient, on a high-speed 2.5-Gbit/s communications board they can turn out to be quite expensive in terms of the noise budget. Selecting a supply rated for 95% efficiency and a seemingly reasonable 200 mV of peak-to-peak noise can create real problems when attempting clock recovery on signals that have only 1 V of amplitude outside the chip, and amplitudes as small as 0.25 V inside it.
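
Making that noise budget explicit, using only the figures quoted above:

```python
# The power-supply noise budget from the paragraph above.
NOISE_PP_V  = 0.200   # switching-supply peak-to-peak noise (from the text)
EXT_SWING_V = 1.00    # signal amplitude outside the chip (from the text)
INT_SWING_V = 0.25    # smallest amplitude inside the chip (from the text)

print(f"noise vs. external swing: {NOISE_PP_V / EXT_SWING_V:.0%}")  # 20%
print(f"noise vs. internal swing: {NOISE_PP_V / INT_SWING_V:.0%}")  # 80%
```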

Therefore, good power-supply decoupling is required in at least two areas. First, whatever noise the power supply intrinsically generates must be filtered and smoothed, starting by placing the supply at the far end of the board, away from the critical high-speed analog-like traces. Distributed decoupling capacitors can then average out the remaining noise, bringing it down to the 20- to 50-mV levels typically required at the PLL power-supply pins. Using good low-impedance filter capacitors at the devices themselves, with a direct low-inductance path to the pins, is critical to managing power-line noise. And keeping the decoupling capacitors on the same side of the board as the component helps minimize the unwanted inductance of vias.
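
A decoupling capacitor behaves capacitively only below its self-resonant frequency, which is why the low-inductance path matters. A minimal sketch, assuming typical values for the capacitor and its mounting inductance (both assumptions, not figures from the article):

```python
from math import pi, sqrt

C_F   = 0.1e-6    # 0.1-uF ceramic decoupling capacitor (assumed)
ESL_H = 1.0e-9    # capacitor ESL plus via/trace inductance (assumed)

def z_mag(f_hz):
    """Magnitude of the series L-C impedance at frequency f_hz."""
    xc = 1 / (2 * pi * f_hz * C_F)
    xl = 2 * pi * f_hz * ESL_H
    return abs(xl - xc)

f_srf = 1 / (2 * pi * sqrt(C_F * ESL_H))
print(f"self-resonance: {f_srf / 1e6:.1f} MHz")        # ~16 MHz
for f in (1e6, 16e6, 100e6, 1e9):
    print(f"{f / 1e6:7.0f} MHz -> |Z| = {z_mag(f):.3f} ohms")
# Above resonance the part looks inductive; at 1 GHz it is several ohms,
# which is why via inductance and mounting side matter.
```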

Parallel Bus Interface

Another potential noise generator is the inevitable on-board presence of multiple chips for functions like framing, segmentation, and reassembly of data. These are usually CMOS devices with large, single-ended parallel buses operating at various frequencies. For example, switching noise from the feeds to a large 32-bit, 155-MHz device could easily fall within the loop bandwidth of the clock-recovery or transmit PLLs, causing significant interference problems. Such a possibility becomes especially likely if the switching bus on the CMOS device is processing long patterns of 00-to-FF rollover counts, which are often used in system testing.
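
A quick way to see the problem: an 8-bit 00-to-FF rollover repeats every 256 bus cycles, concentrating spectral energy at the bus clock divided by 256. The 2-MHz loop bandwidth below is an assumed typical value, not a figure from the article:

```python
BUS_CLOCK_HZ   = 155e6    # parallel-bus clock from the text
PATTERN_LEN    = 256      # 00..FF rollover count repeats every 256 cycles
PLL_LOOP_BW_HZ = 2e6      # ASSUMED clock-recovery loop bandwidth

rollover_hz = BUS_CLOCK_HZ / PATTERN_LEN
print(f"rollover fundamental: {rollover_hz / 1e3:.0f} kHz")   # ~605 kHz
print("falls inside loop bandwidth:", rollover_hz < PLL_LOOP_BW_HZ)
```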

The intricacies of parallel-bus management further underscore the need for good distributed decoupling throughout the board design, not just between the transceivers and the power supply. Referring again to Figure 1, the series of 8-bit, 77-MHz interfaces between the four PMC-Sierra devices and the S3045 device must be carefully routed to maximize timing margin and minimize coupling risk. Line spacing between signals of different origin should be at least three to four line widths to reduce the potential for coupling and interference. In addition, series-damping resistors are needed to suppress ringing from the CMOS output transistors, which can otherwise accumulate into noise problems throughout the system (Fig. 5).
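
Sizing those damping resistors typically follows the classic source-termination rule: driver output impedance plus series resistor equals the trace impedance. The driver impedance below is an assumed typical CMOS value:

```python
Z0_OHMS       = 50.0   # trace characteristic impedance
R_DRIVER_OHMS = 20.0   # ASSUMED typical CMOS output impedance

# Source termination: place the resistor right at the driver pin.
r_series = Z0_OHMS - R_DRIVER_OHMS
print(f"series damping resistor: {r_series:.0f} ohms")
```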

Margins And Headroom

As demonstrated above, next-generation OC-12 through OC-48 multichannel transceiver systems will rely heavily on semiconductor-level integration as the key to achieving required performance, simplifying board-level design, and managing the noise and jitter issues that emerge at higher speeds. In addition to leveraging further refinements of existing bipolar and CMOS processes, next-generation transceiver designs will also benefit from new high-speed, high-integration processes, such as silicon germanium (SiGe), which promise more performance headroom and lower-power operation.

With almost every component in the transceiver board design (except the optical module) now available in a 3.3-V configuration, the goals of cost reduction and noise management are served by using a single power source and eliminating the need for multiple on-board power conversions.
