Monolithic Analog-to-Digital Converters (ADCs) have long been a hotbed of activity. And now the speed demons among these ICs are really coming into their own, pushing the performance envelope and expanding their domain into new applications. High-speed ADCs manufactured in both high-end and mainstream processes are boosting conversion rates across the full range of ADC accuracy.
The fast ADC's ability to digitize higher-frequency signals pushes digital signal processing closer to the beginning of the signal chain, allowing system designers to reap the benefits of higher integration associated with digital design.
Beyond pumping up sampling speeds, semiconductor vendors are focusing on application-specific requirements, which call for higher levels of integration, reduced power consumption, and a migration path to higher levels of precision. Still another trend is the deployment of low-voltage differential signaling (LVDS), for either a parallel or serial interface, as an option for the digital outputs. Chip manufacturers are exploiting LVDS' ability to maintain high dynamic performance at high conversion rates, to lower EMI, to reduce pin counts and package sizes, and to allow more flexible partitioning of system designs.
Just a few years ago, the "high-speed" label applied to any ADC specifying a conversion rate of 1 Msample/s or greater. Today, 10-Msample/s (MSPS) performance would more likely be considered a minimum for high-speed devices. That definition encompasses a host of ADCs, ranging from 8-bit flash ADCs with sample rates in the gigahertz range to 14-bit pipeline ADCs with performance at or near 100 MSPS.
Within this spectrum, performance also varies with the semiconductor process. In the past, the highest sample or conversion rates were achieved with converter chips fabricated in bipolar, biCMOS, or silicon-germanium processes. Then, about 18 to 24 months later, similar performance would be reached in CMOS.
In general, transitioning to CMOS reduced power consumption dramatically while only requiring modest tradeoffs in some specifications, such as signal-to-noise ratio (SNR). As a result, the CMOS chips represented mainstream performance, while biCMOS, bipolar, and other high-end process chips targeted high-performance applications.
To some extent, these generalities still hold true, but the distinctions are less clear. In the past, the transition from bipolar or biCMOS to CMOS involved a jump to finer-line geometries that yielded lower power dissipation (as well as smaller die and package size) without a supply change. Yet as the transition was made to 0.35-µm CMOS, now a mainstream process for high-speed ADCs, the supply voltage dropped from 5 to 3.3 V.
So today, as chip developers look to port their high-performance ADC designs to even finer line geometries, the redesigns are a little harder because of the shift to lower voltages, such as 2.5 V at 0.25 µm and 1.8 V at 0.18 µm. These advanced processes have the potential for higher performance. But converter development either takes longer because of the supply change or requires additional development effort (that is, more design engineers doing chip development), which raises cost. Meanwhile, biCMOS processes have undergone improvements to lower their power dissipation. Together, these changes have blurred the lines separating monolithic ADCs fabricated in biCMOS and CMOS. In the future, the best high-speed performance may even be achieved from the start in pure CMOS processes.
Nevertheless, a look at high-speed ADC performance today still shows clear divisions between mainstream CMOS and other processes. Costly 8-bit flash ADCs fabricated in bipolar processes currently achieve conversion rates as high as 1.5 Gsamples/s. In contrast, 8-bit pipeline ADCs built in mainstream CMOS achieve only up to 250 MSPS, albeit at considerably lower power and cost than the faster flash converters.
Pipeline architectures from 10 to 14 bits are generally used with chips built in either biCMOS or bipolar processes for highest performance and CMOS for lower cost and power. At 10 bits, a biCMOS pipeline converter may run as high as 250 MSPS, whereas a mainstream CMOS version may go about as fast as 170 MSPS. At 12 bits, biCMOS is at about 200 MSPS, while the mainstream CMOS variation reaches perhaps 80 MSPS. At 14 bits, biCMOS pipeline ADCs achieve about 100 MSPS, but just 80 MSPS if implemented in CMOS.
Many of these performance barriers will be surpassed in the coming year. At the same time, the bar for high speed is moving to higher levels of resolution. A soon-to-be-announced 15-bit ADC from Maxim will run up to 100 MSPS (Table 1).
In addition, 16-bit ADCs are now approaching the 10-MSPS criterion for high speed, thanks to improvements in successive-approximation-register (SAR) architecture converters. Going further, Fairchild plans to release a 16-bit, 20-MSPS ADC built on a pipeline architecture in the fourth quarter. AC specifications will include 89 dB of spurious-free dynamic range (SFDR) and 77.5 dB of SNR, while power consumption will be about 800 mW.
But conversion rates provide just one measure of converter performance. A variety of parameters distinguish one device from another, including power dissipation, IF bandwidth, ac specifications such as SNR and SFDR, dc specifications like integral nonlinearity (INL) and differential nonlinearity (DNL), digital interface options, level of integration, and package size. Cost is yet another significant consideration.
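These ac specifications relate to resolution through a standard rule of thumb: an ideal N-bit quantizer driven by a full-scale sine wave yields an SNR of 6.02N + 1.76 dB, and inverting that formula converts a measured SNR into an effective number of bits (ENOB). The short sketch below is purely illustrative and isn't tied to any particular converter:

```python
def ideal_snr_db(bits):
    """Ideal SNR (dB) of an N-bit quantizer with a full-scale sine input."""
    return 6.02 * bits + 1.76

def enob(snr_db):
    """Effective number of bits implied by a measured SNR (dB)."""
    return (snr_db - 1.76) / 6.02

print(ideal_snr_db(12))  # ~74.0 dB: the ceiling for an ideal 12-bit ADC
print(enob(77.5))        # a 77.5-dB SNR corresponds to ~12.6 effective bits
```

By this measure, even a 16-bit part specifying 77.5 dB of SNR delivers roughly 12.6 effective bits at speed, which is why designers weigh the full spec table rather than resolution alone.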
In developing high-speed ADCs, semiconductor vendors often trade these factors off against one another to satisfy the requirements of a given application. For example, there's a continual push to lower power dissipation for a given level of high-speed ADC performance. Sometimes, however, the reduction in power comes at the expense of SNR or SFDR.
Over time, the various applications tend to create niches within the high-speed ADC market. These niches then push converter development in different directions (Table 2). Medical imaging applications such as ultrasound provide many uses for high-speed ADCs. In terms of high-speed performance, ADCs that target ultrasound equipment fall somewhere between low-end and high-end performance.
A typical ADC used here might need 10-bit operation at only 60 MSPS but require 128 channels of data conversion. This creates a need for integration and low power dissipation, which then drives ultrasound-oriented ADC development.
Requirements for integration not only spur development of multichannel ADCs, they also lead to integration of ADCs with digital-to-analog converters (DACs) or other pieces of the signal chain. Texas Instruments is developing a 12-bit, 80-MSPS ADC that integrates a DAC, programmable gain amplifier, digital downconversion, and finite-impulse response (FIR) filters to create an analog front end.
At the same time, chip vendors seek to extend the ADCs' high-speed performance to higher levels of precision. With ultrasound equipment, the ADCs currently offered for these applications now fall in the 10- and 12-bit range. However, newer chips are moving to 14 bits.
Similar trends may be identified in other areas. In particular, manufacturers developing high-speed ADCs in the 10- to 14-bit range are developing pin-compatible families of parts that make it easier for customers to migrate to higher levels of precision while offering a choice of conversion rates.
LVDS options emerge
As ADCs move to higher speeds, chip makers are paying more attention to I/O requirements. One reason is noise. With standard CMOS outputs swinging to 3.3 or 5 V, the noise generated by the converter's output drivers can get injected back into the converter's analog circuitry. At conversion rates of 100 MSPS or higher, the effect of noise on ADC dynamic range becomes noticeable.
Several techniques exist for coping with high conversion rates. One is to demultiplex the outputs so two sets of output buffers are clocked at one half the ADC sample rate. This requires a separate synchronization signal and doubles the number of output lines. In some cases, this technique is applied with 4:1 demultiplexing, which quadruples the number of output lines. Another approach to noise reduction is to implement a digital supply that operates down to a low voltage, such as 0.5 V.
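The demultiplexing scheme is easy to model in software: the full-rate sample stream is split into interleaved buses, each clocked at the sample rate divided by the demux ratio. This toy Python model (an illustration of the idea, not of any vendor's implementation) shows how a 2:1 split doubles the output lines while halving their clock rate:

```python
def demux_outputs(samples, ways=2):
    """Split a full-rate ADC sample stream into `ways` interleaved output
    buses, each clocked at (sample rate / ways)."""
    return [samples[i::ways] for i in range(ways)]

# Six consecutive samples split 2:1 -> two buses at half the rate
bus_a, bus_b = demux_outputs([10, 11, 12, 13, 14, 15])
print(bus_a)  # [10, 12, 14]
print(bus_b)  # [11, 13, 15]

# A 4:1 demux quadruples the number of output buses
print(len(demux_outputs(list(range(8)), ways=4)))  # 4
```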
But the trend among high-speed ADCs now being developed is to address noise problems by adopting LVDS for the digital interface. Because LVDS exhibits a reduced signal swing of ±350 mV, less noise is injected back into the ADC. Moreover, LVDS generates less EMI that can potentially interfere with other circuits. The reduced EMI results from LVDS' lower voltage swing and the tendency of differential outputs to generate opposing electromagnetic fields that cancel.
Differential signaling also provides better noise immunity for the converter's digital outputs, as it rejects common-mode interference. These benefits accrue whether the output is implemented as parallel LVDS (typically two lines per bit) or serial LVDS (only two output lines). Naturally, LVDS noise suppression and noise immunity don't come free. LVDS lines must be terminated with 100-Ω resistors, so there are dc output currents. These result in static power dissipation that's not present with CMOS outputs. Yet at higher conversion rates, the power consumed by CMOS outputs may exceed that of LVDS.
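That crossover can be seen with a back-of-the-envelope comparison: the LVDS termination dissipates a fixed power set by the 350-mV swing across 100 Ω, while CMOS output power scales as CV²f. The load capacitance and toggle rates below are assumed values chosen for illustration, not figures from any datasheet:

```python
# LVDS: static current fixed by the 350-mV swing across the 100-ohm termination
lvds_i = 0.350 / 100.0       # 3.5 mA per differential pair
lvds_p = 0.350 * lvds_i      # ~1.2 mW in the termination, regardless of speed

# CMOS: dynamic power C * V^2 * f per output line, growing with sample rate
c_load, v_swing = 10e-12, 3.3   # assumed 10-pF trace load, 3.3-V swing

def cmos_output_power(toggle_hz):
    """Approximate dynamic power of one CMOS output line."""
    return c_load * v_swing**2 * toggle_hz

print(lvds_p * 1e3)                     # ~1.2 mW, flat across sample rates
print(cmos_output_power(1e6) * 1e3)     # ~0.1 mW at 1-MHz toggling
print(cmos_output_power(100e6) * 1e3)   # ~10.9 mW at 100 MHz: CMOS costs more
```

With these assumed numbers, the CMOS line overtakes the LVDS pair somewhere in the tens of megahertz, which is consistent with the consensus that LVDS becomes compelling near 100 MSPS.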
Vendors may disagree on which output option consumes more power. However, that issue may be a secondary one since the ADC's analog core typically consumes more power than the digital output drivers. In terms of I/O requirements, parallel LVDS typically doubles the number of output lines, yet 2:1 demultiplexed CMOS outputs do so as well. Moreover, serial LVDS reduces output requirements to just two lines. A combination of serial and parallel outputs can also reduce the number of outputs from that required by parallel LVDS.
Most vendors seem to agree that LVDS becomes compelling at about 100 MSPS. Consequently, some vendors are opting to implement LVDS at this sample rate or higher, though at least one chip maker implements LVDS at slower speeds. So far, only a few vendors have introduced ADCs with LVDS outputs, and nearly all of these have been the parallel type. But in time, more ADCs with parallel LVDS are expected.
Nevertheless, National Semiconductor recently introduced a 10-bit, 40-MSPS ADC with serial LVDS output (see "ADC10S040" in Table 1). This converter is likely to be the first of many with serial LVDS, as this option reduces pin counts compared to CMOS and parallel LVDS. Another benefit may be even lower noise than parallel LVDS (Table 3).
Furthermore, some vendors are looking to exploit the reduced pin counts of serial LVDS at speeds below 100 MSPS. This advantage applies to both single- and multichannel devices. In either case, serializing the digital outputs multiplies the signaling speed on each line, so LVDS again becomes a requirement. As a result, a few manufacturers expect to apply serial LVDS at conversion rates as low as 10 MSPS.
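The arithmetic behind that requirement is simple: carrying N-bit samples over one serial pair demands a link rate of at least N times the sample rate, before any framing or synchronization overhead. A quick sketch, with the overhead factor left as an assumption:

```python
def serial_link_rate_mbps(bits, msps, framing_overhead=1.0):
    """Minimum serial bit rate (Mbps) to carry `bits`-wide samples at
    `msps` Msamples/s; framing_overhead > 1.0 models sync/frame bits."""
    return bits * msps * framing_overhead

# A 10-bit, 40-MSPS ADC needs at least 400 Mbps on its two-wire serial pair
print(serial_link_rate_mbps(10, 40))  # 400.0

# Even a modest 12-bit, 10-MSPS part needs 120+ Mbps on the serial link,
# fast enough that low-swing LVDS signaling is the practical choice
print(serial_link_rate_mbps(12, 10))  # 120.0
```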
LVDS also affords system designers greater flexibility in laying out their pc boards. In the past, semiconductor vendors routinely cautioned that traces for the converter outputs should be kept within a few inches. This required that ASICs and FPGAs be kept close to the ADC.
To an extent, these same guidelines have been stressed with ADCs that employ LVDS. Such restrictions on LVDS for data converters contrast with the relaxed guidelines for LVDS in standard digital applications, where it has been employed at distances up to 10 m. Although minimizing trace lengths for ADC outputs is still good design practice, converter manufacturers have discovered that driving LVDS lines up to 10 or more inches doesn't affect converter performance. System designers can exploit this capability to more easily partition the analog and digital portions of their designs.
Need More Information?
Linear Technology Corp.
(408) 432-1900, ext. 2453
Maxim Integrated Products