The onslaught of digital technology, bolstered by advances in process technology, lithography, metallization, and packaging, is now yielding circuits that are faster and more complex than ever before. Take, for instance, the 64-bit microprocessors that run at up to 3 GHz and pack close to 100 million transistors. Some DSPs deliver multigigaflop throughputs. Dynamic RAMs feature capacities of 512 Mbits and data transfer rates of 666 Mbits/s on each I/O pin. Flash memories have hit capacities of 1 to 2 Gbits. Meanwhile, some ASICs have gate counts exceeding 10 million, and FPGAs now boast 3 million gates and multigigahertz I/O ports.
Over the next few years, clock speeds for desktop and server CPUs will move from 3 to 5 GHz. Higher levels of integration will let designers place more than one CPU on a chip and perhaps even a third level of cache. That, in turn, reduces off-chip accesses, enabling processors to deliver higher throughputs. Embedded processors are also climbing the performance ladder. Last year saw the introduction of a four-CPU-on-a-chip 64-bit processor. Many companies are integrating dozens of 32-bit embedded processor cores onto application-specific custom chips that must handle highly parallel computations. DSP chips are also migrating to highly parallel architectures in one of two ways. In some cases, they're using very-long-instruction-word (VLIW) approaches. In others, they're applying the more traditional single-instruction/multiple-data (SIMD) array processing on a chip that contains tens to hundreds of processing elements.
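The SIMD model behind those on-chip array processors can be sketched in a few lines of software. This is a toy illustration of the execution model only, not any vendor's architecture: a single instruction (here, an add) is applied across many data lanes in lockstep.

```python
# Toy model of SIMD execution: one instruction is broadcast to every
# processing element, and each element applies it to its own data lane.
# A chip with hundreds of elements would simply have hundreds of lanes.

def simd_add(lanes_a: list[int], lanes_b: list[int]) -> list[int]:
    """Apply a single 'add' instruction to all lane pairs at once."""
    return [a + b for a, b in zip(lanes_a, lanes_b)]

def simd_scale(lanes: list[int], factor: int) -> list[int]:
    """Apply a single 'multiply by constant' instruction to all lanes."""
    return [factor * x for x in lanes]

result = simd_add(simd_scale([1, 2, 3, 4], 10), [5, 5, 5, 5])
# Every lane executed the same two instructions on different data.
```

The contrast with VLIW is that SIMD issues one operation to many data elements, while VLIW packs several different operations into one wide instruction word.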
While compute performance is on the rise, you can expect flash-memory capacities to hit single-chip densities of 4 Gbits thanks to new memory-cell structures. The two main approaches on this front are the multilevel cell and the mirror bit. The multilevel cell stores two data bits per cell by encoding them into four distinct charge levels. The mirror-bit approach also stores two bits per cell, placing one bit at each end of an insulated gate.
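The multilevel-cell idea reduces to simple encode/decode arithmetic. The sketch below is purely illustrative; the charge levels and sensing thresholds are hypothetical numbers chosen for clarity, not values from any real device.

```python
# Hypothetical multilevel-cell (MLC) model: two data bits map to one
# of four target charge levels, and a read senses the stored charge
# and decodes it back to bits by comparing against threshold windows.

LEVELS = {0b00: 0.0, 0b01: 1.0, 0b10: 2.0, 0b11: 3.0}  # arbitrary units

def program(bits: int) -> float:
    """Encode a 2-bit value as a target charge level."""
    return LEVELS[bits & 0b11]

def read(charge: float) -> int:
    """Decode a sensed charge back to 2 bits via threshold windows."""
    for bits, level in LEVELS.items():
        if abs(charge - level) < 0.5:
            return bits
    raise ValueError("sensed charge outside all valid windows")

for value in range(4):
    assert read(program(value)) == value
```

The density win is that four charge levels yield two bits from one physical cell; the cost is tighter margins between levels, which is why sensing accuracy dominates MLC design.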
Expect next-generation DRAMs to get faster, even if DRAM densities don't jump to 1 Gbit. The memories will incorporate a second-generation double-data-rate (DDR) interface, vaulting I/O bandwidths past 800 Mbits/s per pin. Even faster interfaces in development will push the bandwidth beyond 1 Gbit/s per pin.
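The arithmetic behind those per-pin figures is straightforward: a DDR interface transfers data on both clock edges, so per-pin bandwidth is twice the bus clock. A minimal sketch (the clock frequencies are illustrative):

```python
# Per-pin bandwidth for a double-data-rate (DDR) interface:
# data moves on both the rising and falling clock edges, so each
# pin carries two bits per clock cycle.

def ddr_bandwidth_mbps(clock_mhz: float, transfers_per_clock: int = 2) -> float:
    """Per-pin bandwidth in Mbits/s for a DDR-style interface."""
    return clock_mhz * transfers_per_clock

# A 333-MHz bus clock yields 666 Mbits/s per pin (the DRAM rate cited
# earlier); a 400-MHz clock yields 800 Mbits/s per pin.
assert ddr_bandwidth_mbps(333) == 666
assert ddr_bandwidth_mbps(400) == 800
```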
Though not as visible as DRAMs, static RAMs also continue to raise the density bar: 16-Mbit chips are now available, with 32- and 64-Mbit chips expected over the next few years. Faster interfaces remain a key focus for many SRAMs. Many companies have added DDR interfaces and zero-delay bus turnarounds, but that isn't fast enough for some applications. Quad-data-rate (QDR) interfaces, in which separate input and output ports can both be simultaneously active, further increase memory bandwidth. Still other interface options, such as the SigmaRAM, promise to shorten access times, enabling the memories to work at 400 MHz and beyond.
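The quad-data-rate gain is also simple arithmetic: each port is double-data-rate, and the separate read and write ports run concurrently, doubling aggregate bandwidth again. The clock rate and bus width below are hypothetical examples, not figures from the article.

```python
# Bandwidth arithmetic for DDR vs. quad-data-rate (QDR) SRAM ports.
# One DDR port moves two bus-widths of data per clock; QDR adds an
# independent second port (read and write) active at the same time.

def ddr_port_gbps(clock_mhz: float, bus_width_bits: int) -> float:
    """Aggregate bandwidth of one DDR port, in Gbits/s."""
    return clock_mhz * 2 * bus_width_bits / 1000.0

def qdr_gbps(clock_mhz: float, bus_width_bits: int) -> float:
    """QDR: separate read and write ports running concurrently."""
    return 2 * ddr_port_gbps(clock_mhz, bus_width_bits)

# A hypothetical 200-MHz, 18-bit-wide QDR SRAM:
assert ddr_port_gbps(200, 18) == 7.2   # one direction
assert qdr_gbps(200, 18) == 14.4       # read + write combined
```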
To make these next-generation circuits possible, researchers are toiling away at improving lithography systems. At the same time, new semiconductor processing schemes that employ strained lattice structures and silicon-on-insulator will improve circuit operating speeds and lower power consumption. On-chip metallization for many of the complex processors and ASICs has already shifted from aluminum to multilayer copper wiring. Efforts are now focused on reducing the dielectric constant of the insulating materials to boost circuit performance.
In the memory arena, new nonvolatile technologies offer designers different options for their systems. Ferroelectric technology is popping up in a number of standalone nonvolatile RAM devices, while magnetoresistive memory cells remain experimental. Both show the promise of eventually leading to an ideal nonvolatile memory chip—a memory that can be read or written like a RAM, retains data indefinitely without applied power like an EEPROM or flash device, and does not experience any wearout. Such a device could be used in applications that today require combinations of RAM and nonvolatile devices.
Mounting chip complexity usually calls for greater amounts of I/O. However, simply adding pins to handle more parallel signal buses is becoming counterproductive. Wide, high-speed buses require careful layout and shielding, making them difficult to implement on circuit boards. To avoid these problems, one of the hottest trends is to integrate serializer-deserializer (SERDES) blocks onto the chips to concentrate wide buses into a few high-speed serial channels. This reduces pin count and power and simplifies circuit-board layout.
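The core SERDES operation can be sketched as a shift register pair: the transmitter shifts a wide parallel word out one bit at a time, and the receiver reassembles it. This toy model omits the clock recovery and line coding (such as 8b/10b) that real SERDES blocks need.

```python
# Toy serializer/deserializer (SERDES): an 8-bit parallel word is
# shifted out MSB-first as a serial bit stream over one channel,
# then shifted back in to rebuild the parallel word.

def serialize(word: int, width: int) -> list[int]:
    """Shift a parallel word out MSB-first as serial bits."""
    return [(word >> i) & 1 for i in reversed(range(width))]

def deserialize(bits: list[int]) -> int:
    """Shift serial bits back in to reassemble the parallel word."""
    word = 0
    for bit in bits:
        word = (word << 1) | bit
    return word

# An 8-bit bus becomes a single serial channel and back again:
assert serialize(0xA5, 8) == [1, 0, 1, 0, 0, 1, 0, 1]
assert deserialize(serialize(0xA5, 8)) == 0xA5
```

The pin-count win is the point: eight parallel pins collapse to one serial pair, at the cost of running that channel eight times faster.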
That's not to say that parallel buses will disappear. Bus interface circuits are evolving to meet the needs of next-generation high-speed systems. For instance, low-voltage differential signaling (LVDS) is being used to deliver clean, high-speed signals across backplanes, or even to move data from chip to chip.
Simple logic functions are still around. Gates, flip-flops, and other basic functions can still be purchased as standalone components. What has changed is the move toward single-element packaging. Rather than selling dual- or quad-gate functions in a 14- or 16-pin package, suppliers now offer a single gate or flip-flop in a near-microscopic surface-mount package. The smaller packaging option reduces circuit-board space, allows pinpoint placement of the logic in the signal flow, and shortens circuit-board traces.