Think Small: Can You Meet The Design Challenges At 90 nm And Below?

Feb. 3, 2005
While the transition from 130 to 90 nm has magnified some issues, scaling down to 45 nm and beyond will spawn even tougher design challenges.

Transitioning from 130 to 90 nm didn't initially present the challenges that popped up when going from 180 to 130 nm. But as more designs move to 90 nm (0.09 µm), engineers are once again running into various anomalies and quirks.

"Many of the issues raised at 130 nm are now very evident at 90 nm," says Primal Buch, general manager of the Analysis Business Unit at Magma Design Automation. More functional and physical design issues are appearing as well.

What looms ahead at 65 and 45 nm and below—deep-submicron (DSM) geometries by the design industry's definition—presents even bigger challenges from a designer's perspective. The 2003 International Technology Roadmap for Semiconductors (ITRS) shows that by 2018, an IC operating at 50 GHz will have 14 billion integrated transistors.

At today's 90-nm frontier, a chip can have some 100 million gates, up from about 30 million gates for 130-nm designs. Most current designs top out at about 10 million gates. (In-house captive designs, like those for Intel's latest microprocessors, have 50 million gates and more.) Lithography mask costs double with every technology generation. And the design team involved in producing a given IC grows larger with every new technology generation (Fig. 1).

One evident trend is the widening gap between silicon capacity and design productivity. This is related to a number of factors, including hardware-software coverification, IP reuse and integration, physical design, design for manufacturability (DFM), interconnects, noise sensitivity, power, and thermal solutions—all of which become more pronounced at smaller design geometries. It simply costs more to design and manufacture denser chips.

According to Moore's Law, the number of transistors on a chip increases approximately 58% per year. Meanwhile, design productivity, facilitated by EDA tool improvements, grows only 21% per year. These numbers have held constant for the last two decades. As design feature sizes shrink and more transistors are squeezed into a system-on-a-chip (SoC) IC, the sheer number of on-chip devices far outstrips a design team's ability to harness the full benefits of all the transistors.

A study by the Sematech (SEmiconductor MAnufacturing TECHnology) consortium shows that the number of transistors per design person month is growing at a 21% annual compound rate, even though the number of transistors per chip is increasing at a compound annual rate of 58%. Yet for all of these looming challenges, EDA design tools have reasonably kept pace with designer needs.
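Compounding the two growth rates quoted above shows how quickly the capacity-productivity gap widens. A minimal sketch using the 58% and 21% figures from the text; the ten-year horizon is an arbitrary assumption for illustration:

```python
# Illustrative sketch (not from the article's data tables): compound the
# Sematech growth rates quoted above. The 58% and 21% annual rates come from
# the text; the 10-year horizon is an assumed example value.

def compound(base, annual_rate, years):
    """Value after `years` years of compound growth at `annual_rate`."""
    return base * (1 + annual_rate) ** years

years = 10
capacity = compound(1.0, 0.58, years)       # transistors per chip (normalized)
productivity = compound(1.0, 0.21, years)   # transistors per designer-month

gap = capacity / productivity
print(f"After {years} years: capacity x{capacity:.0f}, "
      f"productivity x{productivity:.1f}, gap x{gap:.1f}")
```

Over a decade, capacity grows roughly 97-fold while productivity grows under 7-fold, leaving a design team more than an order of magnitude behind the silicon it could theoretically fill.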

BRISK ACTIVITY AT 90 nm

Design activity at 90 nm is strong. In fact, it's the fastest-growing area for new design activity, according to Walden Rhines, chairman and CEO of Mentor Graphics Corp. and chair of the EDA Consortium.

EDA tools for 90-nm nodes are largely keeping pace with design needs, although many in the EDA tool industry argue in favor of more. Joseph Sawicki, Mentor Graphics' general manager and vice president, says that "90-nm designs are likely to be more stable than 130-nm designs. However, there are some changes to EDA tools that have not been fully implemented at 90 nm."

EDA tool makers generally agree that there's a need for more scalable logic-synthesis optimization engines; more predictable and robust synthesis techniques; and more integrated modeling, analysis, and synthesis capabilities. There also is a need for greater design reuse. Register-transfer-level (RTL) to graphic design system II (GDSII) EDA tools provide some answers by integrating modeling, synthesis, analysis, and optimization capabilities within a single environment. But some EDA experts feel that more dependable RTL-to-GDSII flows to support behavioral synthesis are required.

Yet the move to 90-nm designs won't demand as much retooling for EDA systems as the earlier move from 180- to 130-nm designs. It's more of an incremental step than the evolutionary one needed for the previous-generation technology. Still, better tools are a must for power- and signal-integrity analysis, DFM, and electronic system level (ESL) designs. (For a more in-depth discussion on power-design issues, see "Power Integrity Comes Home To Roost At 90 nm," p. 55.)

"EDA tool makers have had more time to adapt going to 90 nm from 130 nm, unlike the case of going from 180 nm to 130 nm," says Magma Design's Buch. "However, we need better delay modeling tools, which will become more important at 65 nm and beyond."

Some in the EDA industry maintain that present EDA tools for 90-nm designs will still be useful at 65 and 45 nm. But this requires more tool enhancements, which can become costly.

EDA design tools have progressed from transistor-level to gate-level designs, and now to RTL designs, with each new design generation. On the horizon lie very complex designs with large intellectual-property (IP) blocks and buses. Some of these are already under investigation for the 65- and 45-nm nodes.

"We pioneered an approach called virtual prototyping on the chip," says Eric Filseth, VP of IC digital product marketing at Cadence Design Systems. "It allows an engineer to see what a design looks like before it is broken up into pieces."

"We're going from design creation to design assembly," says Serge Leef, general manager of Mentor Graphics' SoC Verification Division. The trend is toward platform-based designs, in which a number of IP blocks are integrated into a platform and many platforms are subsequently linked together using complex buses.

In fact, experts foresee very complex chip designs with 200 to 300 IP blocks per chip, a serious jump up from today's 15 to 20 blocks per chip. Forecasts indicate that while the number of standard-cell and custom designs will grow, the much larger growth is expected in IP components that will proliferate with future DSM designs (Fig. 2).

This will make timing issues even more critical. Fulcrum Microsystems is preparing for this challenge by offering IP blocks for asynchronous clocking instead of the usual designs with synchronous clocking schemes. Its 16-port crossbar switch has shown new levels of performance at 1.4-GHz rates in a validation chip.

"At 90 nm, leakage is a big issue, which this chip can deal with. Our chip is the only one on the market and supplies enough of a time window to hold data while maintaining power efficiency," explains Abe Akumah, a design engineer at Fulcrum.

"Drop-in" RTL solutions that let SoC designers better connect IP cores and manage on-chip data flows are now available. One example is the SMART interconnect technology developed by Sonics Inc. This decoupled interconnect solution works through the Open Core Protocol Interconnect Partnership (OCP-IP) interface standard, which the company has adopted. Available through an IP business model, it lets designers create the footprint for an SoC IC's architecture.

ESL AS A PRODUCTIVITY BOOSTER

Designing complex chips under 90 nm is where hardware and software coverification comes in, due to the significant architectural challenges stemming from signal-processing, control-processing, and interface issues. As a result, more advanced and smarter ESL tools are required.

"We need to design at the system level, which is an ongoing process," says Mentor Graphics' Sawicki. Most EDA experts agree that ESL design automation will significantly boost productivity in silicon.

Besides denser chips and faster clock speeds at 90 nm, another important factor at this level is signal integrity. This issue is much more important than it was at 130 nm. Other constantly challenging issues include DFM and yields. "The wires between chips are so short that they bring up new issues as they get even shorter," adds Cadence's Filseth.

Copper interconnects cause both width and thickness variations on finished chips. Most DFM tools have focused on copper width variations. However, copper thickness variations, which can range from 20% to 40% across a chip, are just as important, and few tools can address this problem. These varying thicknesses cause timing-delay problems, forcing designers to use buffers on the chip. Such a move increases power and necessitates the use of "guardband" techniques to account for delay variations.
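The link between thickness variation and guardbanding can be sketched with a toy delay model. This is a hypothetical simplification, not any DFM tool's actual model: it assumes wire delay is resistance-dominated, so delay scales inversely with copper thickness, and the nominal delay value is an arbitrary assumption.

```python
# Hypothetical sketch: why 20%-40% copper thickness variation forces timing
# guardbands. Assumption (a simplification, not from the article): delay is
# resistance-dominated, and resistance scales inversely with thickness.

def wire_delay(nominal_delay_ps, thickness_ratio):
    """Delay when actual thickness = thickness_ratio * nominal thickness."""
    return nominal_delay_ps / thickness_ratio

nominal = 100.0  # ps, arbitrary example value

# The article cites 20% to 40% thickness variation across a chip; take the
# worst case at both extremes.
worst_thin = wire_delay(nominal, 1 - 0.40)   # 40% thinner -> slowest wire
best_thick = wire_delay(nominal, 1 + 0.40)   # 40% thicker -> fastest wire

guardband = worst_thin - nominal
print(f"Nominal {nominal:.0f} ps, worst-case {worst_thin:.1f} ps, "
      f"guardband needed: {guardband:.1f} ps")
```

Even under this crude model, a 40% thickness loss inflates delay by roughly two-thirds, which is the kind of margin designers must absorb with buffers and guardbands.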

Gate delays used to be the major design concern, with wiring delays secondary by comparison. Now the scene has flip-flopped (Fig. 3). At 90 nm, wiring accounts for some 75% of the overall delay. At DSM geometries below 90 nm, designers will be forced to shift their focus from logic optimization to wire optimization.

A major migration is under way from aluminum to copper when switching from 130 to 90 nm. No doubt, all of the attendant copper problems will emerge, such as "dishing" (the changing of the copper's shape depending on what material surrounds it) and wiring delays. "Most of the timing is in the wires, not in the gates," adds Filseth.

Copper interconnect resistivity increases severalfold over its bulk value in DSM designs below 90 nm. The increase can approach 2 µΩ-cm at room temperature.
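What elevated resistivity means for a real wire follows from the basic relation R = ρL/(wt). A minimal sketch; the wire cross-sections and the specific resistivity values below are assumptions chosen for illustration, not figures from the article:

```python
# Hypothetical numbers illustrating how resistance per unit length grows as
# wires narrow and resistivity rises: R/L = rho / (w * t). Bulk copper
# resistivity is about 1.7 uOhm-cm; the elevated value adds roughly 2 uOhm-cm
# for narrow DSM lines. All wire dimensions here are assumed example values.

def resistance_per_mm(rho_uohm_cm, width_nm, thickness_nm):
    """Resistance (ohms) of 1 mm of wire with the given cross-section."""
    rho_ohm_m = rho_uohm_cm * 1e-8          # uOhm-cm -> Ohm-m
    area_m2 = (width_nm * 1e-9) * (thickness_nm * 1e-9)
    return rho_ohm_m * 1e-3 / area_m2       # 1 mm of length

wide = resistance_per_mm(1.7, 200, 400)     # wider wire, bulk resistivity
narrow = resistance_per_mm(3.7, 100, 200)   # narrow DSM wire, elevated rho
print(f"Wide wire: {wide:.0f} ohm/mm, narrow wire: {narrow:.0f} ohm/mm "
      f"({narrow / wide:.1f}x)")
```

Halving both width and thickness quadruples resistance by geometry alone; the resistivity rise roughly doubles it again, which is why wire delay comes to dominate at these nodes.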

Designers of 90-nm chips are also finding out that capacitance from the wiring can vary by large margins. It's very hard to predict, which makes it more difficult to set timing margins. Some designers report that wiring capacitance can constitute about two-thirds of a chip's total capacitance.

"Interconnect variability is a very serious issue, particularly as you go down to 65 and 45 nm," explains Jamil Kawa, group director of the Advanced Technology Group at Synopsys. "The big challenge is in interconnect thickness. We're doing work on software routines for optimal metal filling using chemical mechanical polishing (CMP)."

Some IC designers are switching from Monte Carlo design optimization based on static timing analysis to statistical analysis to deal with timing and delay problems. "On the timing side, at 90 nm we see more on-chip variations. The same gate within the same die area will behave differently, depending on how it is defined by optical lithography," says Magma Design's Buch.
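The difference between a worst-case corner and a statistical view can be shown with a toy critical path. This sketch is a hypothetical illustration, not any vendor's tool: gate delays are modeled as independent Gaussians with assumed parameters, whereas real statistical timing analysis also models the spatial correlation from optical-lithography effects that Buch describes.

```python
# Toy sketch: worst-case corner analysis vs. a Monte Carlo statistical view
# of critical-path delay. All parameters are assumed example values.
import random
import statistics

random.seed(42)

NOMINAL_PS = 50.0   # nominal delay per gate (assumed)
SIGMA_PS = 5.0      # on-chip variation per gate (assumed)
N_GATES = 20        # gates on the critical path (assumed)

# Worst-case corner: every gate simultaneously at +3 sigma -- pessimistic,
# because independent variations rarely all align.
corner_delay = N_GATES * (NOMINAL_PS + 3 * SIGMA_PS)

# Monte Carlo: sample each gate independently, look at the distribution.
samples = [sum(random.gauss(NOMINAL_PS, SIGMA_PS) for _ in range(N_GATES))
           for _ in range(10_000)]
mc_mean = statistics.mean(samples)
mc_p997 = sorted(samples)[int(0.997 * len(samples))]  # ~3-sigma quantile

print(f"Corner: {corner_delay:.0f} ps, MC mean: {mc_mean:.0f} ps, "
      f"MC 99.7th percentile: {mc_p997:.0f} ps")
```

Because independent per-gate variations partially cancel along the path, the statistical 3-sigma delay comes in well under the all-corners-worst figure, recovering margin that corner-based signoff would waste.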

"We've gone from EDA tools for yield enhancement at 180 and 150 nm to yield creation at 90 nm and beyond," says Mentor Graphics' Sawicki. "This is becoming a more critical issue, which requires more model-based optical-proximity-correction (OPC) techniques. We see a greater approach to statistical modeling instead of binary modeling."

SUPPRESS MANUFACTURING COSTS

To help bring down complex silicon design and implementation costs, the EDA industry is turning to more robust and comprehensive DFM methodologies. Greater amounts of abstraction levels with smaller line geometries make DFM a must.

DFM is needed at 90 nm and beyond for verification steps like conceptual RC extraction to account for metal thickness variations and for contextual design rules that deal with wire widths and spacings. Even the nuances and subtleties of layout-dependent transistor Spice models need a DFM approach.

The DFM methodology includes the development of better resolution-enhancement technologies (RETs), OPC, and phase-shift masks (PSMs). It also includes more efficient design approaches that will provide higher yields and minimize the number of silicon "re-spins" to accommodate design engineering changes.

RET techniques are needed for handling more silicon layers as design line widths shrink. At 65 nm, RET improvements will enable complex chip designs with as many as 28 layers, up from 13 layers at 90 nm (Fig. 4).

Another issue is process variability. IC designers have always had to wrangle with "lot-to-lot" and "wafer-to-wafer" process variables. But at 90 nm, they're now facing "across-the-die" variability issues.

NEEDED: A UNIFIED DATABASE

Ultimately, DSM designs will require a unified database model. This is important because future DSM designs will contain not only a large amount of digital circuitry, but also critical mixed-signal analog circuitry. A unified data model would hold all of the information about every facet of an IC's design: schematic, netlist, and layout representations; digital and analog representations; and cell-based and custom representations (Fig. 5).

The complexity of DSM designs is forcing EDA manufacturers to work more closely with foundries like Taiwan Semiconductor Manufacturing Co. (TSMC) and Chartered Semiconductor Manufacturing Co. New software reference flows and design-enablement kits provide enhanced capabilities to power management, design for test (DFT), DFM, flip-chip design, and chip-to-package designs. They're streamlining the path from design to volume manufacturing.

Economics also play a major role in DFM designs. The consumer sector now dominates volume production of electronics products. This sector expects electronic products to start at very low price levels, instead of dropping in gradual steps, as IC manufacturers ramp up production and gain greater yields.

With more expensive design efforts, shorter product lifetimes, shorter time-to-market windows, and more competition, there's simply no time for IC manufacturers to work out the bugs in the design and production processes. An SoC must work right the first time it comes off the production line. There's very little room for error.

The cost of very complex chip designs for the consumer market is exemplified in one of Broadcom's latest innovations. The company reports that it has developed an 80-million-transistor multiprocessing chip for set-top boxes that required 1000 person-months of effort.

Design reuse offers one answer to this challenge. The idea is to develop a chip that can be used for more than one application in a given market. Thus, chip companies can better amortize the cost of producing complex SoC ICs at DSM geometries.

But why the rush to 65 nm and below? "Older technologies never die. They can be just as useful and cost-effective at 180 nm as at 65 nm or even 45 nm," says Synopsys' Kawa. He believes that 90-nm designs, and even designs with larger line geometries, will remain useful for quite some time.

About the Author

Roger Allan

Roger Allan is an electronics journalism veteran who served as Electronic Design's Executive Editor for 15 years. He has covered just about every technology beat, from semiconductors, components, packaging, and power devices to communications, test and measurement, automotive electronics, robotics, medical electronics, military electronics, and industrial electronics. His specialties include MEMS and nanoelectronics technologies. He is a contributor to the McGraw-Hill Annual Encyclopedia of Science and Technology, a Life Senior Member of the IEEE, and holds a BSEE from New York University's School of Engineering and Science. Besides Electronic Design, Roger has worked for major electronics magazines including IEEE Spectrum, Electronics, EDN, Electronic Products, and the British New Scientist. He also has industry experience as a design engineer in filters, power supplies, and control systems.

Since his retirement from Electronic Design, he has contributed extensively to Penton's Electronic Design, Power Electronics Technology, Energy Efficiency and Technology (EE&T), and Microwaves & RF, covering all of the aforementioned electronics segments as well as energy efficiency, energy harvesting, and related technologies. He has also contributed articles to other electronics technology magazines worldwide.

He is a "jack of all trades and a master of leading-edge technologies" such as MEMS, nanoelectronics, autonomous vehicles, artificial intelligence, military electronics, biometrics, implantable medical devices, and energy harvesting and related technologies.
