Digital ICs: ASICs

Jan. 7, 2002
From Gates To Megagates, ASICs Flourish

By the late '60s, chip makers such as Fairchild Semiconductor, Motorola, and Texas Instruments (www.fairchildsemi.com, www.motorola.com, www.ti.com) were creating hundreds of logic circuits per year, compiling collections of thousands of simple logic functions—gates, flip-flops, decoders, buffers, and so on. As logic functions became more complex, the challenge of designing every circuit from scratch was starting to bog down system development. Companies then began using small generic arrays of bipolar transistors, called masterslices, that could be interconnected to form gates and flip-flops.

As system designs grew even more complex, power consumption and board complexity became critical. Designers tried to meet nearly impossible system constraints, cramming hundreds of ECL, TTL, or CMOS logic packages on the system logic boards. But in 1974, International Microcircuits introduced the first commercial CMOS array of uncommitted gates-on-a-chip—a gate array.

Shortly after the availability of gate-array technology, programmable logic emerged in the mid-'70s. Over the next few years, CMOS, bipolar, and gallium-arsenide gate arrays gained a lot of momentum. By the mid-'80s, chip gate counts hit tens of thousands of gates for CMOS, about 50,000 for ECL, and up to about 5000 gates for GaAs. Design reuse and predesigned standard cells emerged as an alternative to gate arrays.

Though gate arrays were flexible, they had several notable limitations, especially poor area efficiency when memory blocks were implemented using gates. One solution integrated high-density generic memory blocks into the base silicon.

Once gate-array suppliers saw they could combine some aspects of standard cells with the uncommitted nature of the gate array, choices started to swell. In addition to memory blocks, gate-array suppliers preintegrated other popular functions.

From the mid-'80s through the '90s, field-programmable devices ate away at the lower end of the gate-array market. Many gate-array suppliers dropped lower-density arrays and concentrated on system customers needing 100 kgates and up. Improvements in CMOS performance, combined with much higher gate counts, put the squeeze on bipolar gate arrays. Thus CMOS became the mainstream digital ASIC process.

As transistor features continued to shrink in the late 1980s, design complexities skyrocketed. Larger predesigned blocks, dubbed megacells, were added. By combining the use of megacells, CPU cores, and other blocks of IP, designers in the mid- to late '90s could build full systems-on-a-chip (SoCs). The ability to use many different blocks of IP took many years of work by standards committees, ASIC vendors, users, and design-tool suppliers. Major improvements in design tools eased the design task.

Today, designers can readily implement chips with 5 million gates. In two years, chip densities will exceed 10 million gates. Chips are blazing fast as well. Today's ASICs often operate with clock speeds of 500 MHz and offer specialized I/O interfaces that operate at over 3 GHz. Still higher speeds are ahead, with clock rates beating 1 GHz and high-speed serial interfaces hitting the 5-GHz mark as on-chip gate dimensions drop below 0.10 µm.

See associated timeline.

Design feature size will continue to shrink below 0.13 µm, which will make possible ASICs that pack well over 10 million gates and multiple megabytes of memory. It will also permit chips to achieve system clock speeds of well over 1 GHz, with selective I/O functions able to achieve data transfer rates of up to 5 Gtransfers/s.

Larger, more complex blocks of intellectual property will be available to build system solutions on-chip. Today's 32-bit CPU cores will give way to 64-bit and VLIW processors that can take on ever-more-challenging computational tasks.

Improved standards for core connectivity and on-chip buses will come from the VSIA (www.vsi.org) and other organizations to help simplify the design of chips that pack multiple blocks of IP from multiple suppliers.

More capable EDA tools, from design languages to final layout, routing, and verification, will make possible first-time-right designs that not only meet functional requirements, but also the clock speed demands of future systems.

Expect more extensive use of high-speed serial interface cores for chip-to-chip interconnections and chip-to-system applications such as high-speed serial backplanes and interconnection fabrics such as InfiniBand.

Designers will incorporate larger amounts of on-chip SRAM and DRAM to reduce overall system chip count and maximize memory bandwidth and ASIC performance. By integrating the memory on-chip, designers can use extremely wide memory buses, say 256 to 1024 bits wide, which wouldn't be practical with off-chip memories.
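The bandwidth payoff of a wide on-chip bus is simple arithmetic: bytes per transfer times transfers per second. A minimal sketch, using illustrative bus widths and clock rates (not figures from the article):

```python
# Peak memory bandwidth = (bus width in bits / 8) * clock rate.
# Comparing a wide on-chip bus against a typical off-chip bus width.

def peak_bandwidth_gbps(bus_width_bits: int, clock_mhz: float) -> float:
    """Peak memory bandwidth in gigabytes per second."""
    bytes_per_transfer = bus_width_bits / 8
    transfers_per_sec = clock_mhz * 1e6
    return bytes_per_transfer * transfers_per_sec / 1e9

# A 512-bit on-chip bus vs. a 64-bit off-chip bus, both at 200 MHz.
on_chip = peak_bandwidth_gbps(512, 200)   # 12.8 GB/s
off_chip = peak_bandwidth_gbps(64, 200)   # 1.6 GB/s
print(on_chip, off_chip)
```

At the same clock rate, the eight-fold wider bus delivers eight times the peak bandwidth, which is why on-chip integration pays off: a 512-pin external memory bus would be impractical to route and drive.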

Improved mixed-signal capabilities will be possible, both through the use of standard CMOS processes and by designing with new processes such as silicon-germanium to integrate both digital and RF circuits on the same chip.

Standard-cell design approaches will merge with field-programmable gate-array (FPGA) technology. Programmable blocks on an ASIC speed up chip design, and the last few thousand gates can be configured by downloading the bit pattern into the on-chip FPGA block or blocks.

Increased use of more levels of metallization will improve on-chip connectivity and performance. ASIC designs are starting to migrate away from aluminum interconnects to copper. Production-proven processes, such as chemical-mechanical polishing to planarize the wafer surface, and the ability to deposit copper and low-k dielectric materials, will allow designers to leverage 10 or more metal layers to interconnect tens of millions of gates.

There also is a movement toward lower operating voltages. As the number of gates on a chip increases, so does power consumption. Lowering today's 2.5-V operating voltage to 1.5 V and even lower will grant designers significant reductions in power consumption.
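The savings follow from the standard dynamic-power relation for CMOS, P = α·C·V²·f: power scales with the square of the supply voltage. A quick sketch of the drop from 2.5 V to 1.5 V (the capacitance, activity factor, and clock rate below are illustrative assumptions, not figures from the article):

```python
# Dynamic CMOS switching power: P = alpha * C * V^2 * f
# (alpha = activity factor, C = switched capacitance, f = clock rate).

def dynamic_power_watts(alpha: float, cap_farads: float,
                        vdd_volts: float, freq_hz: float) -> float:
    """Dynamic switching power of a CMOS chip."""
    return alpha * cap_farads * vdd_volts ** 2 * freq_hz

# Same hypothetical chip at 2.5 V and 1.5 V: 10 nF switched
# capacitance, 20% activity factor, 200-MHz clock.
p_25 = dynamic_power_watts(0.2, 10e-9, 2.5, 200e6)
p_15 = dynamic_power_watts(0.2, 10e-9, 1.5, 200e6)
print(p_15 / p_25)  # (1.5 / 2.5)^2 = 0.36, i.e. a 64% reduction
```

Because the ratio depends only on the voltages, the 64% reduction holds for any chip whose dynamic power dominates, regardless of the capacitance or clock values chosen here.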

See associated timeline.

About the Author

Dave Bursky | Technologist

Dave Bursky is the founder of New Ideas in Communications, a publication website featuring the blog column Chipnastics – the Art and Science of Chip Design. He is also president of PRN Engineering, a technical writing and market consulting company. Before founding these organizations, he spent about a dozen years as a contributing editor to Chip Design magazine. Concurrent with Chip Design, he was also the technical editorial manager at Maxim Integrated Products, and prior to Maxim, Dave spent over 35 years working as an engineer for the U.S. Army Electronics Command and as an editor with Electronic Design Magazine.
