Electronic Design

Weapons Of Noise Detection

Noise is a perennial problem in electronic circuits, but the design challenges it presents change year to year. Here's a look at some new plans of attack.

The war on noise now takes us into the next frontier: the high-speed serial backplane. Actually, this is about both noise and jitter, which is a practical approach, if not rigorously correct. Tektronix's Pavel Zivny likes to limit noise to "all things undesirable in the vertical (voltage, or power for optical signaling) direction," distinguishing it from jitter, "which is all things undesirable in the ‘horizontal,' i.e., time, direction." The problem is that it's difficult to separate the two, since they combine to affect bit error rate (BER) performance.

Most serial-data measurements are made, or at least presented for analysis, as eye patterns (Fig. 1). Engineers started using eye patterns qualitatively when storage-tube analog oscilloscopes became available. They would look at two transitions on a repeating bit pattern, and the overlaid traces would smear out if the data was jittery.

Early digital storage oscilloscopes (DSOs) with color displays, but relatively shallow memories, added a third qualitative dimension by showing the relative distribution of jitter events through the use of color. Today's really fast DSOs, with memories that can store long data sequences and DSP engines capable of data analysis, have made eye-pattern measurements quantitative.

They also simplify setup and provide standard pattern masks that enable patterns to be used for go/no-go testing (see "Eye Pattern Sample Size and Clocking").

To see why noise in the broader sense matters in serial data transmission, consider several eye diagrams. Figure 2a is an eye diagram of a good signal. The receiver will have an easy time detecting transitions. In Figure 2b, the voltages in the eye are settled, so that's not a problem, but the timing of the edges is very jittery. In Figure 2c, the edge timing is good, but the signal has a bad case of vertical "hum," possibly from a power supply.

Figure 2d depicts a signal with both noise and jitter problems. The receiver's comparator will need low noise of its own to resolve the narrow voltage range between the high and low levels. The timing margin for the circuit latching the data will also be diminished by the way the transitions move back and forth.

The real goal of these measurements is to assess the effect of the various jitter and noise components on BER (Fig. 3). Jitter has many components. For noise, the main breakdown is between random and deterministic noise.

Random noise is what we commonly call noise, or RMS noise, when we look at a signal with an oscilloscope or a spectrum analyzer. Deterministic noise splits into periodic noise and data-dependent noise. Periodic noise has a clear spectral distribution that's uncorrelated (unrelated) with the signaling bit rate. For example, crosstalk from the power supply will appear as periodic noise. So will the processor clock, assuming your serial data isn't running on the processor clock.

Data-dependent noise (DDN) captures vertical impairments caused at the bit rate in a way that depends on the bit pattern—for instance, a "lonely-low pattern" (zeros surrounded by many ones on both sides). DDN often is caused by ISI (intersymbol interference), that is, by a physical mechanism that couples energy from one bit into adjacent bits due to inductive or capacitive coupling, losses, or transmission line effects.
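The "lonely low" effect is easy to see in a toy model. The sketch below (my own illustration, not from the article) passes an NRZ pattern through a simple first-order lowpass channel; the filter constant, oversampling factor, and bit pattern are all assumptions chosen to make the ISI visible. A single zero surrounded by ones never settles to the zero level, because energy from the neighboring ones smears into it.

```python
# Illustrative sketch of ISI: a first-order lowpass channel smears energy
# from one bit into the next. All parameter values are assumptions.

def channel(bits, samples_per_bit=8, alpha=0.2):
    """Pass an NRZ bit stream through y[n] = y[n-1] + alpha*(x[n] - y[n-1])."""
    y, out = 0.0, []
    for b in bits:
        for _ in range(samples_per_bit):
            y += alpha * (b - y)
            out.append(y)
    return out

pattern = [1, 1, 1, 1, 0, 1, 1, 1, 1]   # a "lonely low" surrounded by ones
wave = channel(pattern)

# Sample each bit at its center; look at the lonely low (bit index 4).
centers = [wave[i * 8 + 4] for i in range(len(pattern))]
print(centers)   # the zero level is pulled well above 0 by the surrounding ones
```

A long run of zeros through the same channel would settle to the true zero level; it's the isolated bit that suffers, which is why patterns like the lonely low are used to provoke DDN.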

According to LeCroy's Mike Hertz, since the complete data record is available in an instrument's memory, the location of individual bits can be determined by comparing each bit interval in the original waveform with a pre-loaded mask. When mask testing is turned on, the entire waveform is scanned bit by bit and compared to the mask.

Upon detecting a mask hit, the bit number is stored, and a table of bit values is generated. The table is numbered starting with the first bit in the waveform, and it can be used to index back into the original waveform to display the failed bit.

Certain eye-pattern measurements are specified as required tests by many standards. The basic eye measurements deal with amplitude and timing.

Eye amplitude is the difference between the simple mean of the distribution around the zero level and the mean of the distribution around the one level. Eye-amplitude measurements are formed by histogramming amplitude values in a region near the center of the eye, normally the central 20% of the interval between the zero-crossing times.

Eye height is a signal-to-noise measurement. It's very similar to eye amplitude, except the standard deviation of both the one and zero levels is subtracted from the eye amplitude.

Eye width indicates the total jitter in the signal. The time between the crossing points is computed based on the mean of the histograms at the two zero crossings in the signal, and the standard deviation of each distribution is subtracted from the difference between the two means.
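The three basic eye measurements reduce to simple arithmetic on the level and crossing-time histograms. Here's a minimal sketch using synthetic sample distributions (the data, noise levels, and unit-interval normalization are my assumptions); note that the article describes subtracting one standard deviation from each distribution, while some instruments and standards use 3σ instead.

```python
# Sketch of eye amplitude, eye height, and eye width computed from
# histogrammed samples. All sample data here is synthetic.
import random
import statistics

random.seed(1)
ones  = [1.0 + random.gauss(0, 0.02) for _ in range(5000)]   # level-1 samples
zeros = [0.0 + random.gauss(0, 0.02) for _ in range(5000)]   # level-0 samples
# Crossing times at the two ends of one bit interval (unit interval = 1.0):
left  = [0.0 + random.gauss(0, 0.01) for _ in range(5000)]
right = [1.0 + random.gauss(0, 0.01) for _ in range(5000)]

eye_amplitude = statistics.mean(ones) - statistics.mean(zeros)
eye_height = eye_amplitude - statistics.stdev(ones) - statistics.stdev(zeros)
eye_width = (statistics.mean(right) - statistics.mean(left)
             - statistics.stdev(right) - statistics.stdev(left))
print(eye_amplitude, eye_height, eye_width)
```

Noisier levels shrink eye height without changing eye amplitude, and jitter at the crossings shrinks eye width, which is exactly the separation of vertical and horizontal impairments described earlier.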

Optical signals on fiber require an alternate eye-pattern measurement called the extinction ratio. It's necessary because laser transmitters aren't fully shut off during data transmission. Not surprisingly, extinction ratio is the ratio of the optical power with the laser in the on state to that with the laser in the off state. Laser power measurements are a little trickier than voltage measurements because they require optical-to-electrical converters in front of the measurement instrument.
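The calculation itself is trivial once the two powers are measured. A quick sketch, with made-up power values (extinction ratio is commonly quoted in dB as well as as a plain ratio):

```python
# Extinction ratio from measured optical powers. The power values are
# hypothetical, chosen only to illustrate the arithmetic.
import math

p_on, p_off = 500e-6, 25e-6         # watts: laser on-state and off-state power
er_linear = p_on / p_off            # plain ratio
er_db = 10 * math.log10(er_linear)  # the same ratio expressed in dB
print(er_linear, er_db)
```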

Turning to timing measurement, eye crossing is the point where transitions from zero to one and from one to zero reach the same amplitude. It's expressed as a percentage of eye amplitude. A measurement instrument looks at horizontal slices across the eye diagram and picks out the slice with the minimum histogram width.

If you take vertical slices instead of horizontal, you can measure average power, the mean value across the entire data stream. Unlike the eye-amplitude measurement that separates the ones and zeros histograms, average power is the mean of both histograms. If the data encoding is working as it should, average power should be 50% of the overall eye amplitude.
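That 50% figure follows directly from averaging the combined histograms. A minimal sketch with an idealized, perfectly balanced stream (the noiseless sample values are an assumption for clarity):

```python
# For a DC-balanced stream, the mean of all samples sits at 50% of the
# eye amplitude. Idealized noiseless samples, for illustration only.
ones  = [1.0] * 500
zeros = [0.0] * 500        # balanced encoding: equal numbers of ones and zeros
all_samples = ones + zeros

avg_power = sum(all_samples) / len(all_samples)
eye_amplitude = sum(ones) / len(ones) - sum(zeros) / len(zeros)
print(avg_power / eye_amplitude)   # → 0.5 for a balanced stream
```

A ratio drifting away from 0.5 would suggest the encoding's DC balance is off, or that one level's distribution is skewed.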

The essential element for BER calculations is time interval error (TIE), the difference between data edges and edges of the recovered clock. Measuring the TIE histogram lets you determine the likelihood of a jitter value exceeding a given maximum.
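In code, TIE is just an edge-by-edge subtraction. The sketch below builds a synthetic TIE record (the edge times and jitter magnitude are assumptions) and shows how the histogram's spread bounds the frequency of large jitter excursions:

```python
# TIE sketch: difference between actual data-edge times and the ideal
# recovered-clock edge times. All edge times here are synthetic.
import random
import statistics

random.seed(0)
ui = 1.0                                  # unit interval (normalized)
ideal = [k * ui for k in range(10000)]    # recovered-clock edges
actual = [t + random.gauss(0, 0.02 * ui) for t in ideal]   # jittered data edges

tie = [a - i for a, i in zip(actual, ideal)]

# The spread of the TIE histogram bounds how often jitter exceeds a limit:
sigma = statistics.stdev(tie)
exceed = sum(abs(t) > 3 * sigma for t in tie) / len(tie)
print(sigma, exceed)
```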

To obtain BER, the data sample's TIEs are presented as a histogram of TIE value versus the number of occurrences of that value. The objective is to determine the probability that a data transition occurs simultaneously with the sampling of data. The histogram yields the conditional probability of a data edge occurring at a given time within a bit period, given that the data is sampled at that time. A bathtub curve shows this relationship graphically (Fig. 4).

There's a catch here. Systems typically specify bit error rates in the 10⁻¹² range. It takes a lot of edges to measure events with probabilities down to 10⁻¹², far too many to acquire and store on a contemporary instrument. That necessitates extrapolating the histogram from a smaller set of measurements.
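One common extrapolation approach, sketched below, is to model the random-jitter tails of the measured TIE histogram as Gaussian and extend them analytically. The Gaussian-tail model, the sigma value, and the dual-Dirac-style doubling for the two crossings are all assumptions for illustration, not the article's prescribed method:

```python
# Sketch of tail extrapolation: fit a Gaussian to the measured TIE
# histogram, then extend its tail to probabilities far below anything
# directly observed. Sigma is an assumed fit result, in unit intervals.
import math

sigma = 0.02     # RMS jitter from a measured TIE histogram (assumed)
ui = 1.0         # unit interval (normalized)

def q(x):
    """Gaussian tail probability P(X > x) for a unit-variance variable."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# Probability that an edge from either crossing reaches the sampling point
# at the eye center (ui/2 away) -- the floor of the bathtub curve there:
ber_center = 2 * q((ui / 2) / sigma)
print(ber_center)   # astronomically small for sigma = 0.02 UI
```

Sweeping the sampling point across the unit interval instead of fixing it at the center traces out the full bathtub curve of Figure 4.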
