Digital Circuits Break New Performance Barriers

Jan. 12, 2004
Advances in all aspects of silicon manufacturing are delivering digital and mixed-signal circuits that run faster and pack more functionality than anyone had predicted just a few years ago.

This year, new processors will run at 4 GHz. Others will pack dual CPUs and 400 million transistors. Dynamic RAMs will pack 1 Gbit. Flash memories will hold up to 8 Gbits. And FPGAs will offer up to 10 million gates. This year will also see the debut of the first 4-Mbit magnetoresistive RAMs, memory interfaces that push data-transfer rates to over 6 Gbits/s per pin, software-configurable DSP architectures that deliver throughputs of over 20 GFLOPS, and a wide range of ASIC "platforms" that will help lower the cost of implementing system-on-a-chip (SoC) solutions.

Throughput is the name of the game for CPUs, DSPs, and memories, both DRAMs and SRAMs. And to reach even higher throughputs, designers are applying lots of parallelism and adding new interfaces that move data faster from chip to chip. Some examples of what's coming can be seen at next month's International Solid-State Circuits Conference in San Francisco, Calif. In the session on processors, Sun Microsystems will divulge details of a dual-core UltraSPARC processor, while IBM Corp. will unveil the multithreaded design used in its Power5 microprocessor.

Large on-chip caches will appear on many CPUs, with capacities of 4 to 8 Mbytes expected on devices sampling in 2004 and 2005. Such large caches, implemented on 90-nm processes, should help push transistor counts on a CPU past the 400 million mark in 2004 and to over half a billion in 2005.
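To put that in perspective, a quick back-of-envelope calculation shows why an 8-Mbyte cache alone accounts for roughly 400 million transistors. The sketch below assumes a conventional six-transistor (6T) SRAM cell per bit and ignores the tag arrays, redundancy, and CPU core logic that sit on top of it:

```
# Rough sketch: transistor count of an 8-Mbyte on-chip cache.
# Assumption: a conventional 6-transistor (6T) SRAM cell per bit; real caches
# also carry tag arrays, redundancy, and ECC bits that are not counted here.
cache_bytes = 8 * 1024 * 1024            # 8-Mbyte cache
cache_bits = cache_bytes * 8             # about 67 million bits
transistors = cache_bits * 6             # 6T cell per bit
print(f"{transistors / 1e6:.0f} million transistors")  # roughly 400 million
```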

In memory interfaces, front-side buses have already moved to 800 MHz and use double-data-rate (DDR) DRAMs, which now operate at 400 MHz, with 533-MHz DDR II memories expected later this year. Competing with the DDR DRAMs is the Rambus RDRAM, which currently operates at 800- to 1200-MHz data rates, with 1600-MHz devices expected to sample by year's end. For systems where low latency is key, such as high-speed networking, specialty DRAMs like the reduced-latency DRAM can fill a niche, providing higher capacities than static RAMs.
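As a rough illustration of what those clock rates mean in bandwidth terms, the arithmetic below assumes a standard 64-bit-wide memory channel; actual sustained throughput falls well short of these peak figures:

```
# Peak-bandwidth arithmetic for an assumed 64-bit memory channel (illustrative only).
channel_bytes = 64 // 8                   # 64-bit data path = 8 bytes per transfer
ddr400 = 400e6                            # DDR-400: 400 million transfers/s
fsb800 = 800e6                            # 800-MHz front-side bus
print(f"DDR-400 channel: {ddr400 * channel_bytes / 1e9:.1f} GB/s")   # 3.2 GB/s
print(f"800-MHz FSB:     {fsb800 * channel_bytes / 1e9:.1f} GB/s")   # 6.4 GB/s
```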

Despite DRAM capacity growth slowing from fourfold every two years to roughly twofold over the same span, the 1-Gbit DDR DRAM generation has finally arrived in limited production. But whether the density is 512 Mbits or 1 Gbit, new DRAMs will start to sport higher-speed interfaces that push per-pin data-transfer speeds to 6 Gbits/s and beyond. Expect to see first samples of such DRAMs later this year from at least one manufacturer.
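To see what a 6-Gbit/s pin rate buys at the device level, the sketch below assumes a hypothetical 16-bit-wide DRAM; real interface widths will vary by vendor and product:

```
# Aggregate device bandwidth from a per-pin data rate (illustrative assumption:
# a hypothetical x16 DRAM; actual interface widths vary by vendor and product).
per_pin_gbps = 6.0                        # 6 Gbits/s per data pin
data_pins = 16                            # assumed x16 device
device_gbps = per_pin_gbps * data_pins    # 96 Gbits/s
print(f"{device_gbps:.0f} Gbits/s, or {device_gbps / 8:.0f} Gbytes/s per device")
```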

Static-RAM densities have also hit a plateau at 16 Mbits, though a few companies may sample 32-Mbit devices this year. Yet speed is still the buzz: shorter access times and faster interfaces remain key goals for many SRAMs. To speed up data transfers, most SRAM vendors now offer versions with zero delay for bus turnarounds. Some suppliers have also added quad-data-rate interfaces, in which separate input and output ports can be simultaneously active to further increase memory bandwidth.
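The bandwidth payoff of those separate, simultaneously active ports is easy to see with simple arithmetic. The figures below assume a 36-bit-wide quad-data-rate-style SRAM clocked at 250 MHz, which is representative rather than tied to any specific part:

```
# Quad-data-rate SRAM bandwidth sketch (assumed 36-bit data paths at 250 MHz;
# actual clock rates and widths vary by vendor and generation).
clock_hz = 250e6
width_bits = 36
port_bw = clock_hz * 2 * width_bits       # each port transfers on both clock edges
total_bw = 2 * port_bw                    # read and write ports run concurrently
print(f"per port: {port_bw / 8e9:.2f} GB/s, combined: {total_bw / 8e9:.2f} GB/s")
# per port: 2.25 GB/s, combined: 4.50 GB/s
```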

In addition to denser flash memories that will hit the 4-Gbit mark, this year will see a kaleidoscope of new nonvolatile memory technologies, a few of which will actually reach sampling. Ferroelectric memory has been commercially available in low-density devices, and 1-Mbit parts will begin sampling this year.

The first commercial 4-Mbit magnetoresistive RAM will also be released. The MRAM combines most features of an ideal nonvolatile memory—it can be read or written like an SRAM, it retains data indefinitely without applied power, and it doesn't have a wearout mechanism. The main drawback is the limited density possible today. However, higher-density devices are on the drawing boards.

FPGAs and ASICs are also benefiting from process advances. Samples of FPGAs with upwards of 10 million gates will be out late this year. To counter the high cost of ASIC mask sets and complex design-verification procedures, many ASIC suppliers now offer platform solutions (sometimes referred to as structured ASICs). These chips contain predefined logic that can shorten design and verification time and cut the number of custom masks from over 20 to fewer than five, drastically reducing the cost of obtaining prototypes compared with a full-custom design.
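A rough cost comparison shows why trimming the custom mask count matters. The per-mask figure below is purely illustrative, since actual 90-nm mask prices vary widely and are not quoted here:

```
# Illustrative mask-cost comparison (assumed $40,000 average per mask layer;
# real per-layer prices at 90 nm vary widely and are not quoted in the article).
cost_per_mask = 40_000
full_custom = 25 * cost_per_mask          # "over 20" custom masks
structured = 4 * cost_per_mask            # "fewer than five" custom masks
print(f"full-custom mask set:     ${full_custom:,}")    # $1,000,000
print(f"structured-ASIC mask set: ${structured:,}")     # $160,000
```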

Basic logic functions ranging from gates to bus interfaces are still around. What has gained in popularity is the use of single-element devices such as a NAND gate or inverter housed in a nearly microscopic package. The small packages reduce board area and allow pinpoint placement of the logic in the signal flow, reducing wire lengths and ultimately improving overall performance.


About the Author

Dave Bursky | Technologist

Dave Bursky is the founder of New Ideas in Communications, a publication website featuring the blog column Chipnastics: The Art and Science of Chip Design. He is also president of PRN Engineering, a technical writing and market-consulting company. Before founding those organizations, he spent about a dozen years as a contributing editor to Chip Design magazine. Concurrent with Chip Design, he was also the technical editorial manager at Maxim Integrated Products, and before Maxim, Dave spent over 35 years working as an engineer for the U.S. Army Electronics Command and as an editor with Electronic Design Magazine.
