Electronic Design
Upgrade To 100G DP-QPSK Core Networks With The Aid Of Analog

Think network traffic can’t get any heavier? Though it has already climbed at a rate of more than 50% annually over the past several years, an even larger data deluge looms on the horizon thanks to the emergence of video-on-demand, fiber-to-the-home, 4G wireless, and cloud computing. Carriers around the world are desperate for technologies that enable upgrades from current 10- and 40-Gbit/s backbone networks to 100 Gbits/s.

The only practical way to achieve the desired leap in core network capacity is to increase optical data-channel bit rates from 10 or 40 Gbits/s to 100 Gbits/s, a technically challenging task indeed. That’s because optical fiber non-idealities such as polarization mode dispersion (PMD) and chromatic dispersion (CD) limit the practical data rate of traditional amplitude-modulated signaling to 10 Gbits/s for core network links.

Consequently, advanced phase-modulation techniques such as dual-polarization quadrature-phase-shift keying (DP-QPSK) and novel coherent receivers are required to meet the speed objectives of next-generation networks. This demands creative design techniques and innovative analog solutions for 100-Gbit/s optical transmitters and receivers.

Dual Polarization And Coherent Detection

DP-QPSK holds the key to achieving 100 Gbits/s over the installed base of fiber (and its inherent limitations). Quadrature-phase-shift keying (QPSK) is a communication technique that encodes two bits of data into every transmitted symbol. Dual polarization (DP) doubles the capacity of an optical channel by transmitting data on two orthogonal polarizations, an X and a Y polarization, of the same channel.

DP combined with QPSK achieves a fourfold capacity improvement relative to standard, single polarization, amplitude-shift-keyed systems. Thus, DP-QPSK transmission minimizes the effects of fiber impairments that would limit 100-Gbit/s signaling.
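The bit-per-symbol bookkeeping behind that fourfold gain can be made concrete with a short sketch. This is an illustrative Gray-coded QPSK mapping in plain Python, not any vendor’s actual encoder:

```python
# Illustrative DP-QPSK bit mapping: 2 bits per QPSK symbol, doubled by
# transmitting on two polarizations. The Gray-code map below is a common
# textbook choice, not taken from a specific transponder design.
import cmath

QPSK_MAP = {
    (0, 0): cmath.exp(1j * cmath.pi / 4),
    (0, 1): cmath.exp(3j * cmath.pi / 4),
    (1, 1): cmath.exp(-3j * cmath.pi / 4),
    (1, 0): cmath.exp(-1j * cmath.pi / 4),
}

def qpsk_modulate(bits):
    """Map a flat bit list (even length) to QPSK symbols, 2 bits/symbol."""
    return [QPSK_MAP[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def dp_qpsk_modulate(bits):
    """Split the bit stream across X and Y polarizations: 4 bits/symbol period."""
    half = len(bits) // 2
    return qpsk_modulate(bits[:half]), qpsk_modulate(bits[half:])

bits = [0, 0, 1, 1, 0, 1, 1, 0]
x_pol, y_pol = dp_qpsk_modulate(bits)
# 8 bits fit in 2 symbol periods: four times the density of 1-bit/symbol
# amplitude-shift keying.
```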

In a 100-Gbit/s DP-QPSK transmitter and coherent receiver, modulator drivers are critical analog components for long-distance optical transmission systems (Fig. 1). Drivers primarily serve to encode the electrical signal onto a continuous-wave (CW) laser via an optical modulator and operate at the interface of the multiplexer data source and optical modulator.

The standard modulator for 100G DP-QPSK consists of three sub-modulators, each of which must be properly biased to generate the QPSK signal. Four single-ended drivers provide the large drive needed to deliver the required 2Vπ (i.e., 7.0-V p-p) swing to the modulator.

Through careful design and with proper understanding of the interface between the multiplexer and optical modulator, modulator drivers can be optimized to compensate for frequency roll-off of the modulator or multiplexer components. This allows the transmit chain to maintain a high bandwidth for excellent signal integrity. Modulator driver components have already been field-proven for 40G DQPSK (2 × 21.5 Gbaud) systems. Therefore, the primary challenge for 100G development is to tailor the modulator driver characteristics to support higher-speed signaling.

A coherent receiver (discussed in detail later), used on the receiving end of the optical fiber link, operates by mixing the incoming data signal with a local oscillator (Fig. 1, again). Mixing the high-frequency optical data signal with another optical signal (i.e., local oscillator) will downconvert the data to a frequency that can be processed by next-generation, high-speed analog devices.

By combining a coherent receiver with a fast analog-to-digital converter (ADC) and digital signal processor (DSP), the coherent system offers the unique ability to equalize the optical channel and correct for PMD and CD present in the long-distance high-speed link. As a result, it’s possible to recover the distorted 100-Gbit/s signal.
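The CD correction the DSP performs can be sketched as a frequency-domain all-pass equalizer: chromatic dispersion applies a quadratic phase versus frequency, and the equalizer applies its conjugate. The fiber parameters below are typical illustrative values (17-ps/nm/km fiber at 1550 nm), not figures from this article:

```python
# Hedged sketch of a frequency-domain chromatic-dispersion equalizer.
# CD is an all-pass channel: H(f) = exp(-j*pi*D*lam^2*L*f^2 / c).
# The DSP inverts it by applying the conjugate phase per frequency bin.
import math, cmath

C = 299792458.0   # speed of light, m/s
LAM = 1550e-9     # carrier wavelength, m (assumed)
D = 17e-6         # dispersion, s/m^2 (17 ps/nm/km, typical SMF)
L = 1000e3        # 1000-km link (assumed)

def cd_phase(f):
    """All-pass CD response at baseband frequency f."""
    return cmath.exp(-1j * math.pi * D * LAM**2 * L * f**2 / C)

def cd_equalizer(f):
    """The DSP equalizer is simply the conjugate (inverse) response."""
    return cd_phase(f).conjugate()

f = 10e9  # a 10-GHz component of the baseband signal
residual = cd_phase(f) * cd_equalizer(f)
# residual is 1 (unity gain, zero phase): the channel is undone exactly,
# which is why CD is fully correctable in a coherent receiver.
```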

SMT Challenges At 28G/32G

The raw symbol rate for a 100-Gbit/s DP-QPSK signal is 25 Gbaud, but protocol overhead brings the baud rate to 28 or 32 Gbaud depending on the system design. Traditionally, electronic components that support speeds up to 28 Gbits/s are designed in large and expensive metal packages with microwave connectors that directly connect to interfacing components.
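The arithmetic behind those symbol rates is simple: DP-QPSK carries 4 bits per symbol (2 bits per QPSK symbol times 2 polarizations), and protocol/FEC overhead raises the line rate. The overhead percentages below are illustrative assumptions, not vendor specifications:

```python
# Symbol-rate arithmetic for 100G DP-QPSK. Overhead figures (~12% for
# OTU4-style framing, ~28% for soft-decision FEC) are rough illustrative
# assumptions chosen to reproduce the 28/32-Gbaud figures quoted above.
BITS_PER_SYMBOL = 2 * 2  # QPSK (2 bits) x dual polarization (x2)

def baud_rate(payload_gbps, overhead=0.0):
    """Return the symbol rate in Gbaud for a given payload and overhead."""
    line_rate = payload_gbps * (1.0 + overhead)
    return line_rate / BITS_PER_SYMBOL

raw = baud_rate(100)          # 25 Gbaud with no overhead
framed = baud_rate(100, 0.12) # 28 Gbaud with ~12% overhead
sd_fec = baud_rate(100, 0.28) # 32 Gbaud with ~28% overhead
```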

The widespread deployment of 100G in backbone networks will require innovative solutions based on surface-mount (SMT) packaging and standard printed-circuit-board (PCB) interconnects for reduced system size and cost. Routing data signals over PCB material and interfacing with low-cost SMT packaging presents many practical design challenges.

FR4 laminate is the PCB material of choice for many of today’s high-speed analog, digital, and radio-frequency (RF) applications due to its excellent mechanical and thermal properties and its low cost. However, just as PMD and CD limit the speeds of optical fiber links, the lossy properties of FR4 material limit the speeds of electrical interconnections on PCB traces.

Specifically, FR4’s relatively high dielectric losses limit edge rates and induce pattern-dependent jitter at 28 and 32 Gbits/s, which degrades system performance. Beyond that, the construction of FR4 substrates, made of a woven fiberglass cloth with an epoxy resin binder, is inherently non-uniform, creating possible imbalances in differential transmission lines. That’s why electrical interconnect in high-speed 100G systems requires hybrid PCBs with FR4 as the base material and uniform, low-loss materials like Rogers RO3003 ceramic-filled PTFE or Panasonic’s Megtron 6 PPE blend resin.
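A back-of-envelope comparison shows why the laminate matters. Using the standard first-order dielectric attenuation formula, with typical datasheet-style material constants (assumed here, not measured values):

```python
# First-order dielectric attenuation: alpha_d = 8.686*pi*f*sqrt(er)*tan_d/c
# in dB/m. Material constants below are typical published values and are
# assumptions for illustration (FR4: er~4.3, tan_d~0.02; a low-loss
# ceramic-filled PTFE laminate: er~3.0, tan_d~0.001).
import math

C = 299792458.0  # m/s

def dielectric_loss_db_per_cm(f_hz, er, tan_d):
    return 8.686 * math.pi * f_hz * math.sqrt(er) * tan_d / C / 100.0

f = 16e9  # ~Nyquist frequency of a 32-Gbaud signal
fr4 = dielectric_loss_db_per_cm(f, er=4.3, tan_d=0.020)
low_loss = dielectric_loss_db_per_cm(f, er=3.0, tan_d=0.001)
# FR4 loses well over an order of magnitude more per unit length, which
# is what forces hybrid stackups for the 28/32-Gbaud routes.
```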

Telecom systems, including 100G DP-QPSK systems, typically require operating lifetimes of 15 years or more. As such, low-cost, high-reliability SMT packaging—already used in 40-Gbit/s telecom applications—is now being developed to support 100G systems.

Beyond thermal management and second-level joint reliability, the most challenging aspect of high-speed package design is optimizing the transition from SMT package to PCB to minimize reflections, and therefore jitter, at the high-speed input and output interfaces. A good design must also minimize performance sensitivity to manufacturing and assembly tolerances. Tools such as HFSS have proven useful in such design.

To illustrate, a simulated versus measured comparison of the I/O package-EVB transition interface of a 100G modulator driver package reveals that packages with 3-dB bandwidth higher than 40 GHz are feasible and that simulation tools match well with measured results.

Reduce Power With A Bias-T

A DP-QPSK system incorporates four modulator drivers, each delivering output voltages as high as 7.0 V p-p. An example is Inphi’s IN3212SZ Mach-Zehnder modulator driver (Fig. 3). Due to the large drive requirement, each modulator driver can dissipate substantial power, and that dissipation is multiplied fourfold in a 100-Gbit/s DP-QPSK transmission system. Thus, reducing the power dissipation of modulator-driver components becomes critical since it eases thermal management and shrinks operating costs.

In this case, a practical power-reduction method is to include an external bias-T at the output of each modulator driver (Fig. 4). The inductor in the bias-T effectively doubles the maximum output voltage swing that can be delivered by the driver. Consequently, a driver with bias-T is able to produce the same output swing (as a driver without a bias-T), but can operate off of a much lower power supply, saving considerable power.
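A rough power estimate shows the appeal of the approach. With an inductive (bias-T) load the output can swing roughly symmetrically about the supply, so about half the supply voltage delivers the same peak-to-peak swing; the headroom and current figures below are illustrative assumptions:

```python
# Illustrative power estimate for the bias-T technique. With the bias-T
# inductor, only about half of the required swing must come from the
# supply rail. Headroom and supply-current values are assumptions, not
# specifications of any particular driver.
def driver_supply(v_swing_pp, headroom, inductive_load):
    """Minimum supply for a given p-p swing, with or without a bias-T."""
    swing_from_supply = v_swing_pp / 2.0 if inductive_load else v_swing_pp
    return swing_from_supply + headroom

V_SWING = 7.0    # required 2*Vpi drive, V p-p
HEADROOM = 1.0   # assumed output-stage headroom, V
I_SUPPLY = 0.4   # assumed average supply current per driver, A

p_no_bias_t = driver_supply(V_SWING, HEADROOM, False) * I_SUPPLY  # ~3.2 W
p_bias_t = driver_supply(V_SWING, HEADROOM, True) * I_SUPPLY      # ~1.8 W
# Multiply the difference by four drivers per DP-QPSK transmitter.
```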

While it may seem straightforward to include a bias-T at the output of a modulator driver, practical challenges arise in terms of implementation. Specifically, the frequency response of the bias-T interacts with the driver’s output stage and shapes the overall frequency response, which generally affects output signal quality. Thus, placement of the bias-T along the output transmission line is critical.

Furthermore, most commercially available bias-Ts aren’t broadband enough to provide the desired frequency response for 28- or 32-Gbit/s applications. For that reason, it’s common to use two or more bias-T coils in series to ensure a good broadband response. For instance, one coil may be used to present an open circuit at low frequencies, while the other coil handles high-frequency content. In this regard, care must be taken when incorporating a bias-T into a 100-Gbit/s DP-QPSK transmission system.

Coherent Receiver Up Close

A coherent receiver consists of an optical hybrid mixer that combines the incoming data signal with a local oscillator (LO) (Fig. 1, again). A set of photodiodes mixes the combined optical signal into a downconverted electrical current while the transimpedance amplifier (TIA) amplifies the electrical current for processing by the ADC/DSP.

The LO laser, as well as the continuous-wave (CW) laser on the transmit side, must have very low noise for the coherent receiver to correctly demodulate the incoming signal. Also, the offset frequency of the LO laser, relative to the transmit laser, must be maintained below several gigahertz.

TIAs used in coherent receivers must meet several key requirements: high bandwidth, excellent linearity, wide input dynamic range, and the ability to deliver the optimum output amplitude to the interfacing ADC. Because many of these parameters directly trade off with one another, coherent TIA design is particularly demanding.

It can be shown that the average dc electrical current going into the TIA is proportional to the LO power level, while the peak ac electrical current is proportional to the signal power and the local oscillator power:

I_AVG ∝ P_LO

I_PK ∝ √(P_SIG · P_LO)

Because the signal power into the coherent receiver may vary by 18 dB or more, depending on link distance and system architecture, a coherent TIA must be able to accommodate a wide input-current dynamic range. Generally, a TIA’s minimum input current is dictated by its input-referred noise current. For a coherent TIA, the maximum input current is determined by the linearity requirement of the ADC/DSP that follows the receiver.
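The square-root dependence above has a useful side effect: it compresses the signal-power range into a smaller current range at the TIA input. A numeric illustration, with an assumed responsivity and LO power:

```python
# Numeric illustration of I_AVG ∝ P_LO and I_PK ∝ sqrt(P_SIG * P_LO).
# Responsivity and power levels are illustrative assumptions.
import math

R = 0.8        # assumed photodiode responsivity, A/W
P_LO = 10e-3   # assumed local-oscillator power, W

def avg_dc_current(p_lo_w, resp=R):
    return resp * p_lo_w  # set by the LO, independent of signal power

def peak_ac_current(p_sig_w, p_lo_w, resp=R):
    return 2.0 * resp * math.sqrt(p_sig_w * p_lo_w)

p_hi = 1e-3                 # strongest expected signal, W
p_lo_sig = p_hi / 10**1.8   # 18 dB weaker
i_ratio = peak_ac_current(p_hi, P_LO) / peak_ac_current(p_lo_sig, P_LO)
# i_ratio ≈ 7.9 (= sqrt of the 18-dB power ratio): the square root
# compresses the optical dynamic range seen by the TIA.
```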

Typically, higher-gain TIAs achieve better noise, while lower-gain TIAs achieve better linearity. Thus, a variable-gain TIA is required to accommodate the wide input dynamic range of coherent receivers. Because noise is lowest at high gain and total harmonic distortion (THD) is lowest at low gain, a mid-band gain setting often provides the best signal-to-noise-plus-distortion ratio (SNDR).

Beyond having to accommodate a wide range of input signal levels, the TIA must deliver the optimal output amplitude to the ADC that follows it. ADCs generally perform best when operated at their full-scale range, which is the maximum analog amplitude that the ADC can digitize.

Because TIA linearity tends to degrade as output amplitude increases, the TIA must achieve excellent linearity at the highest output amplitude required. In practice, the TIA operates at an output amplitude lower than the full-scale range of the ADC, which allows amplitude transients on the optical data signal to be captured and processed by the receiver.
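That back-off amounts to a simple headroom calculation: target the TIA output a few decibels below the ADC full scale so transients don’t clip. The full-scale and back-off numbers below are illustrative assumptions:

```python
# Simple headroom calculation for TIA-to-ADC back-off. Full-scale input
# and back-off values are illustrative assumptions, not from a datasheet.
ADC_FULL_SCALE_PP = 0.8   # assumed ADC full-scale input, V p-p
BACKOFF_DB = 3.0          # assumed headroom for optical transients

target_vpp = ADC_FULL_SCALE_PP * 10 ** (-BACKOFF_DB / 20.0)
headroom_ratio = ADC_FULL_SCALE_PP / target_vpp
# Target ≈ 0.57 V p-p; transients up to ~1.4x the nominal amplitude are
# still digitized without clipping.
```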

The TIA is a challenging component to design since it must support a wide range of requirements at its input and its output. Because many of these requirements trade off against one another, it’s difficult for a single TIA to support the needs of the entire 100G ecosystem. However, one TIA, the Inphi 2850TA, is commercially available and being used in a 100G coherent system that’s deployed in the field. Figure 5 illustrates the recovered constellation and data outputs of that TIA used in a 100G coherent link.
