The late 1990s and early 2000s saw an unprecedented rise in demand for Internet Protocol (IP) and Ethernet transport capacity in the wide-area network (WAN). As a result, carriers were required to upgrade their 2.488-Gbit/s SONET/SDH systems, both with higher per-wavelength data rates and much denser packing of wavelengths onto individual fibers. The resultant increase in noise bandwidth and inter-channel interference made forward error correction (FEC) a necessity in 10-Gbit/s dense wavelength division multiplexing (DWDM) systems.
The ITU-T agreed to standardize an interoperable hard-decision FEC at 2.5 Gbits/s and 10 Gbits/s based on the RS(255,239) code in G.975. This was captured in G.709, the fundamental format specification for the optical transport network (OTN). However, system vendors quickly discovered that a stronger FEC was one of the least expensive ways of adding margin to their 10-Gbit/s systems. Consequently, a variety of “second generation” codes with the same 6.7% overhead as the G.709 standard FEC were developed, including turbo product codes and low-density parity-check (LDPC) codes.
The most effective were iterative-decode block codes that used fairly powerful component codes (BCH and RS codes of order m = 9 to m = 11) and just two to three iterations. These codes achieved net coding gains about 2 dB better than the standard G.709 FEC. The 8-dB coding gain of these second-generation 6.7% FECs has become an integral part of the link budgets of many deployed networks, and an 8-dB FEC is therefore a requirement for any 10G equipment that needs to communicate over these networks.
More recently, the explosion of bandwidth demand from cloud-based applications and IP television (IPTV) and video on demand (VoD) services has driven operators to upgrade their 10-Gbit/s-based metro and regional DWDM networks with 40-Gbit/s and 100-Gbit/s links. Operating these links over the existing infrastructure requires both higher symbol rates and complex modulation schemes, both of which increase the system’s sensitivity to optical impairments.
For example, a fourfold increase in signal bandwidth (from 10G to 40G) requires a receive filter four times as wide, which admits four times as much noise and degrades the signal-to-noise ratio (SNR) by 6 dB. Because each dB of SNR reduction costs approximately 25% of reach in an amplified system, a 6-dB penalty limits a system capable of 400-km reach to only about 72 km. A stronger FEC is the most economical way of regaining some of the link budget, to the degree possible within the Shannon limit. Any remaining gain must be recovered using optical techniques.
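The arithmetic above can be checked with a quick sketch. The 25%-per-dB reach penalty is the rule of thumb assumed in the text, not a universal law:

```python
import math

def reach_after_penalty(base_reach_km, penalty_db, loss_per_db=0.25):
    """Reach shrinks ~25% for each dB of SNR penalty (assumed rule of
    thumb from the text)."""
    return base_reach_km * (1 - loss_per_db) ** penalty_db

# A 4x increase in noise bandwidth costs 10*log10(4) dB of SNR.
penalty = 10 * math.log10(4)             # ≈ 6.02 dB
print(round(reach_after_penalty(400, 6)))  # → 71 (km), close to the ~72 km cited above
```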
One attempt to create a stronger FEC, Swizzle, is a “third-generation” hard-decision FEC designed to address the needs of 40-Gbit/s and 100-Gbit/s optical transport networks. It’s intended to reside in 40G or 100G OTN framer devices and is best suited to book-ended intra-domain interfaces that require gain greater than that offered by the G.709 standard FEC (Fig. 1).
Operating at the standard 6.7% overhead rate, Swizzle provides 9.45 dB of net coding gain at an output bit error rate (BER) of 1E-15. Because this high gain comes integrated into the framer and requires no increase in overhead rate, Swizzle is ideal for cost-sensitive metro applications. For very long-haul systems requiring additional gain, the Swizzle FEC can be disabled and a module with a soft-decision FEC using up to 20% redundancy can be plugged in.
Two-dimensional concatenated block codes are widely used at 10G. Figure 2 shows a generic 2D code in which every bit is covered by two block codes. Decoding proceeds by correcting each row, then each column, and iterating. This is a good strategy, but it is possible to do much better.
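As a toy illustration of the row/column structure (using single parity checks in place of the BCH/RS component codes real 10G FECs use), a single bit error lands at the intersection of the one failing row check and the one failing column check:

```python
def encode_2d(bits):
    """Append an even-parity bit to every row, then a parity row covering
    each column (a toy product code with single-parity components)."""
    rows = [row + [sum(row) % 2] for row in bits]
    parity_row = [sum(col) % 2 for col in zip(*rows)]
    return rows + [parity_row]

def correct_single_error(word):
    """One bit error fails exactly one row check and one column check;
    flip the bit at their intersection."""
    bad_rows = [r for r, row in enumerate(word) if sum(row) % 2]
    bad_cols = [c for c, col in enumerate(zip(*word)) if sum(col) % 2]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        word[bad_rows[0]][bad_cols[0]] ^= 1
    return word
```

Real 10G codes substitute component codes that correct several bits per row or column, but the row-then-column correction flow is the same.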
The Swizzle FEC does three things better than existing 2D codes. First, it uses a better interleaving structure that improves performance and decreases latency. Second, it allows a more parallel implementation, which increases performance for the same latency. And third, it replaces the simple “last decoded codeword wins” procedure with a maximum-likelihood decode procedure that is extremely resistant to false decode.
Swizzle FEC Code Structure
In a 2D code, each row is covered by all the columns and each column is covered by all the rows. A key choice in the design of such a code is the number of bits shared by each row/column pair. If each row/column shares many bits and a cluster of errors occurs in this shared group, no amount of iterative decoding will help. These killer patterns, also known as trapping sets, cause error floors that sharply degrade the performance of such codes (Fig. 3).
The other option is to make the rows/columns orthogonal so they share only one or a few bits. This approach, however, has very high latency, which can only be countered by using quite small, weak component codes and by sharply limiting the number of iterations.
The Swizzle FEC design takes a different approach, inspired by LDPC, where the codewords interlace in a spiral pattern so each codeword is covered by almost all the others nearby (Fig. 4). This approach eliminates the error floor, with less than half the latency of an orthogonal 2D design.
In the Swizzle FEC code, the maximum overlap of each pair of codewords is only 2 bits. Each Bose-Chaudhuri-Hocquenghem (BCH) component codeword can correct more than twice that many bits. This helps make trapping sets rare enough so they don’t materially affect the performance of the code.
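The bounded-overlap property can be demonstrated with a toy interlace (row codewords plus wrapped-diagonal codewords over a square bit array, not Swizzle’s actual pattern), where any two codewords share at most one bit:

```python
def diagonal_codewords(n):
    """Toy interlace: row codewords plus wrapped-diagonal codewords over
    an n x n bit array; each codeword is a set of (row, col) positions."""
    rows = [{(r, c) for c in range(n)} for r in range(n)]
    diags = [{(r, (r + j) % n) for r in range(n)} for j in range(n)]
    return rows + diags

words = diagonal_codewords(8)
max_overlap = max(len(a & b) for i, a in enumerate(words)
                  for b in words[i + 1:])
print(max_overlap)  # → 1: any two codewords share at most one bit
```

With the pairwise overlap held far below the component code’s correction capability, no small cluster of errors can simultaneously defeat both codewords covering it.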
Encoding & Decoding Procedures
The encoding of a Swizzle FEC frame is conventional (Fig. 5). As G.709 frames are passed to the encoder, the bits are “swizzled” and parity is calculated on the resulting codewords. The parity is then inserted into the FEC positions within the G.709 frame. The original information bits are streamed through with minimal added latency and are not altered in any way.
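A minimal sketch of that systematic encode flow follows. The permutation and the parity function here are placeholders (the actual swizzle pattern and BCH encoder are not public); the point is that information bits pass through unmodified while parity is computed on the interleaved view:

```python
def swizzle(bits, perm):
    """Reorder bits according to a (hypothetical) interleave permutation."""
    return [bits[p] for p in perm]

def encode_frame(info_bits, perm):
    """Information bits pass through untouched; parity computed on the
    swizzled view is appended in the frame's FEC positions."""
    parity = [sum(swizzle(info_bits, perm)) % 2]  # stand-in for BCH parity
    return info_bits + parity
```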
In a 2D decode structure, first the rows must be decoded and corrected, then the columns, and so on (Fig. 6). This serial approach creates substantial latency. It also requires about twice as much RAM as the FEC block size (or constraint length). This latency and RAM would be better used for additional decoding iterations.
The Swizzle FEC decodes all of the interlaced codewords as soon as it receives them. It then uses a reconciliation process to determine which corrections to apply. This more highly parallelized approach saves half the latency of the standard 2D decode, which in turn allows twice the number of iterations. In PMC-Sierra’s implementation, which uses layout-optimized circuitry and an intelligent scheduler to allocate decode resources, the improvement is much more than twice as many iterations.
Just as important, however, looking at the codeword decode results as a whole allows us to make much better decisions about which bits to flip. Each bit is covered by two codewords. When those codewords agree about the value the bit should have, they are almost always right.
However, when one codeword is a false decode, there is almost always a disagreement between the two covering codewords. We can use various heuristics to determine which one is a true decode and which one is false. For example, a codeword that corrects the maximum number of bits (T bits) is much more likely to be false than one correcting T-1 bits or less.
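The reconciliation idea can be sketched as follows (the margin heuristic and its inputs are illustrative, not PMC-Sierra’s actual decision rules):

```python
def reconcile(vote_a, vote_b):
    """Each vote is (proposed_bit, n_corrected, t) from one covering
    codeword, where t is the component code's correction capability.
    Agreeing decodes win outright; on disagreement, trust the decode
    with more margin, since a codeword correcting its full t bits is
    more likely to be a false decode than one correcting t-1 or fewer."""
    bit_a, n_a, t_a = vote_a
    bit_b, n_b, t_b = vote_b
    if bit_a == bit_b:
        return bit_a
    return bit_a if (t_a - n_a) > (t_b - n_b) else bit_b
```

For example, a decode running at full capacity (correcting 4 of 4 bits) loses the disagreement to one with a correction to spare.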
This combination—very tight interleaving, a high number of iterations, and the soft-decode-like maximum-likelihood reconciliation—provides very high decoding performance, closely approaching the Shannon Bound for hard-decision FECs.
The 6.7% Swizzle FEC can correct up to a random BER of 4.8E-3, corresponding to 9.45 dB of net effective coding gain. It can correct up to 2048 consecutive errors at 40G and up to 6000 consecutive errors at 100G. Figures 7 and 8 show the BER performance of the Swizzle FEC; the curves represent measurements taken in hardware using PMC-Sierra’s FPGA implementations.
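These figures are internally consistent: the quoted net coding gain follows from the input/output BERs and the 239/255 code rate under the standard hard-decision NCG definition. A quick check (inverting the BER-to-Q relation by simple bisection):

```python
import math

def q_from_ber(ber):
    """Invert BER = 0.5 * erfc(Q / sqrt(2)) for Q by bisection."""
    lo, hi = 0.0, 20.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * math.erfc(mid / math.sqrt(2)) > ber:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def net_coding_gain_db(ber_in, ber_out, rate):
    """Standard hard-decision NCG: Q-gain between output and input BER,
    minus the 10*log10(rate) penalty for the redundancy overhead."""
    return (20 * math.log10(q_from_ber(ber_out) / q_from_ber(ber_in))
            + 10 * math.log10(rate))

ncg = net_coding_gain_db(4.8e-3, 1e-15, 239 / 255)  # close to the 9.45 dB quoted above
```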
Performance Test Procedures
All the performance tests were performed using PMC-Sierra’s FEC Performance FPGA Demonstrator Board, which includes standard O.150 pseudorandom binary sequence (PRBS) generation/detection and standard OTN framing.
A sophisticated BER generator is implemented in FPGA. Built from a blend of carefully chosen pseudorandom and true-random elements, the generator has excellent spectral characteristics and an effectively infinite repetition period. Because it is strictly digital, it is absolutely stable over variations in process, voltage, and temperature.
In addition to random BERs, the generator also offers the ability to test burst correction and corrected zeroes/ones statistics. The entire setup is controlled over USB from a host laptop that collects and collates data from the board, allowing the setup to run continuously for multiple weeks. PMC-Sierra’s results have also been confirmed with independent error generators in our customers’ labs.
In addition to having a higher maximum gain, the Swizzle FEC offers substantially lower latency than comparable codes. Furthermore, the latency/gain tradeoff of a Swizzle FEC decoder is configurable during connection setup, allowing each link in a network to add only the latency actually required. The gain/latency tradeoff is strictly a function of the amount of decode processing that is done. There is no effect on the encoder or the format on the line.
Since Swizzle is a hard-decision FEC, it can easily be implemented in CMOS and integrated with an OTN framer to deliver a cost-optimized, power-efficient solution for the high-volume metro market. This additional gain can be used to extend reach, operate over lower-quality fiber, and correct for nonlinear impairments that constrain maximum wavelength density.