The basic premise of ASICs has remained the same over the past decade—integrate as much of the system as possible onto one chip to reduce chip count and system cost. What has changed is how much can be integrated on that one chip. Advances such as 90-nm lithography are pushing this trend, as is the ability to deposit 10 or more layers of copper metallization to interconnect all of the transistors.
Today, ASIC vendors and foundries offer a broad array of design rule sets that bring many options. These options range from low-cost commodity CMOS processes with features above 180 nm to advanced high-performance processes with features down to 90 nm. Thus, companies can craft chips with gate counts of up to about 10 million, plus several million bits of either static RAM or embedded DRAM.
There already is some limited use of silicon-germanium processes for high-performance, mixed-signal and RF applications, such as wireless local-area networks and Bluetooth. However, many companies are trying to create such chips with just CMOS to keep the cost as low as possible. In another year or two, when design rules drop to 65 nm, it will be possible to produce chips with double to quadruple the complexity of today's highly integrated solutions.
Waiting in the wings are even higher-performance processes that leverage silicon-on-insulator (SOI) and various versions of silicon-germanium and strained-silicon technology. They will achieve multi-gigahertz clock speeds, combined analog and digital functionality (especially RF circuits), and low-leakage current to reduce power consumption.
Freescale Semiconductor and IBM Microelectronics already use commercial SOI processes to produce PowerPC and AMD's x86-compatible processors. Foundries like Chartered Semiconductor in Singapore and TSMC in Taiwan have licensed SOI processes from IBM and Freescale, respectively, and will offer the processes as another option to their fabless customers.
GROWING ASIC COSTS
Most experts agree that it will cost roughly $10 million to develop a high-end ASIC, from concept to fabrication. This implies that the market for the final ASIC had better be large enough to amortize development cost across a large number of chips. In the consumer market, where millions of units are often sold, the amortized development cost only adds a few dollars to silicon cost.
But if the market only supports hundreds of thousands of units, then as much as $100 may have to be added to each chip's cost to recoup the development investment. And if the chip doesn't function as desired the first time through the fab, a new mask set usually must be generated to implement the engineering fixes. This drives up chip cost even further and delays its market entry.
Depending on the complexity of the fix, a few mask layers or an entire mask set may have to be redone, and sometimes more than once. A full mask set on a 90-nm process can cost as much as $1 million. If the changes are limited to the metal wiring layers, however, revision costs can usually be held to roughly 20% of the full mask-set cost, minimally impacting the chip cost.
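The amortization arithmetic above can be sketched in a few lines. The figures used here are the article's own rough numbers ($10 million in development cost, a $1 million full 90-nm mask set, and metal-only respins at about 20% of that); they are illustrative, not vendor quotes.

```python
def amortized_nre_per_chip(nre_dollars, units, respins=0, respin_cost=0.0):
    """Development (NRE) cost spread across the production run,
    including any mask-respin charges incurred along the way."""
    return (nre_dollars + respins * respin_cost) / units

NRE = 10_000_000            # ~$10M concept-to-fab for a high-end ASIC
FULL_MASK_SET = 1_000_000   # full mask set on a 90-nm process
METAL_ONLY = 0.20 * FULL_MASK_SET  # metal-layers-only respin, ~20%

# Consumer volumes (millions of units): only a few dollars per chip.
print(amortized_nre_per_chip(NRE, 5_000_000))                   # 2.0
# Modest volumes (100,000 units): about $100 per chip.
print(amortized_nre_per_chip(NRE, 100_000))                     # 100.0
# Same volume after one full-mask-set respin...
print(amortized_nre_per_chip(NRE, 100_000, 1, FULL_MASK_SET))   # 110.0
# ...versus a cheaper metal-only respin.
print(amortized_nre_per_chip(NRE, 100_000, 1, METAL_ONLY))      # 102.0
```

The last two lines show why limiting a fix to the metal layers matters: the respin adds $2 per chip instead of $10 at that volume.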
AN OPEN WINDOW
These cost and time issues have opened a window of opportunity for a solution between the custom ASIC and the field-programmable gate array. Seizing that opportunity are structured and platform ASICs, which fill that middle ground. They're available from many traditional ASIC suppliers, such as AMI, Fujitsu, LSI Logic, and NEC, as well as from smaller companies like eASIC, Faraday, and Palmchip.
By resurrecting some aspects of gate arrays—pre-manufacturing the silicon with the basic logic elements, memory, and other predefined resources—and using only two to five metal layers to define the functionality, these chips provide a quick time-to-market approach. And, they can be more cost-effective than cell-based ASICs if large volumes aren't required. On the other end of the spectrum, they also can offer a more cost-effective and higher-performance solution than FPGAs when production volumes go past several thousand units.
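The volume tradeoff described above is a break-even calculation: the structured ASIC carries up-front (NRE) cost but a lower per-unit price, while the FPGA has essentially no NRE but a higher unit price. The sketch below uses hypothetical prices chosen only to illustrate the "several thousand units" crossover; none of these numbers come from the article or any vendor.

```python
def breakeven_units(fpga_unit_cost, asic_nre, asic_unit_cost):
    """Volume at which a structured ASIC's NRE is paid back by its
    lower per-unit cost, assuming the FPGA route has no NRE."""
    return asic_nre / (fpga_unit_cost - asic_unit_cost)

# Hypothetical prices: $100 FPGA vs. a $20 structured-ASIC unit
# carrying $300,000 of NRE (masks, design, qualification).
print(breakeven_units(100, 300_000, 20))  # 3750.0 units
```

Past that crossover volume, every additional unit favors the structured ASIC; below it, the FPGA's zero-NRE model wins.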
Twenty or more players make up the platform/structured ASIC market, offering a wide range of architectures and on-chip resources. In a few cases, the vendors provide a family of different predefined chips featuring various combinations of gates, memory, I/O cells, embedded CPUs, and specialized I/O pre-integrated as hard cores in the silicon. Specialized I/O may include double-data-rate DRAM controllers, Ethernet media access controllers, or multigigabit serializer-deserializers (SERDES).
In a majority of cases, though, vendors offer just the base set of functions—logic, memory, I/O cells, and some phase-locked loops—pre-integrated. Everything else is implemented from a library of soft cores that are configured on-chip using the metallization and the uncommitted logic gates.
Although many designers find the differentiation between structured and platform ASICs somewhat elusive, the most common differentiator is that the platform ASICs tend to offer more application-optimized functions. Such functions may include high-speed SERDES channels, embedded CPUs, DSP support, or other blocks. This enables the optimized chip to deliver higher levels of integration and performance versus the more generic structured ASIC.
LEVERAGE THE LIBRARIES
Whether you're considering a cell-based solution or a structured or platform solution, the library of intellectual property (IP) available from the ASIC vendor is a key consideration. Vendors can never slack off in developing new blocks of IP, because the system designers always require new functions, such as PCI Express, Gigabit Ethernet controllers, and DDR2 SDRAM controllers and interfaces.
To provide added value, the ASIC vendors are going beyond just adding more cores by prequalifying core supersets. Toshiba takes such an approach in its SoCMosaic offering. This cell-based approach includes various preconfigured blocks in its libraries. LSI Logic does the same thing in its RapidChip Platform ASIC and full ASIC offerings.
These preconfigured "superblocks" reduce the system designer's time to complete a system-on-a-chip (SoC) design. A potential penalty, though, is that the solution may not be 100% tuned for the end application. If a design calls for a CPU, most likely it also will need a cache controller, caches, a memory controller, a DMA controller, and perhaps an interrupt controller to form a full CPU subsystem.
However, not all features desired by the engineer may be included. Or perhaps too many features are present, imposing some extra silicon overhead. Still, by pregrouping the IP blocks needed to implement such a subsystem, ASIC vendors deliver two benefits: significant reductions in both design time and post-design verification effort.
As chips get more complex, this post-design verification becomes an even more important portion of the overall chip design process, because it helps ensure, as far as possible, that the design is correct. That, in turn, reduces the probability that the chip will require a second mask set to fix bugs not caught the first time around.