Designers of standard-cell and custom ASICs and SoCs are all understandably excited about the on-going move from 130-nm process technologies to 90-nm processes. Some are planning ahead for further silicon shrinks to 65 nm. And why wouldn't they?
Process advances will keep Moore's Law alive for the foreseeable future. They'll bring vast gains in density and computing power. New generations of super-integrated handhelds, with enhanced wireless capabilities, are sure to follow. Unfortunately for the unwary, so are migraine headaches.
As silicon processes shrink, problems quickly mount for designers. The dimensions of silicon features may change, but the laws of physics, and those of Georg Simon Ohm, are immutable. The assorted undesirable electrical and physical effects of process shrinks on the functionality of leading-edge IC designs have come to be known collectively as "nanometer effects." As the industry plunges deeper into the nanometer design realm, these effects are the most vexing issues facing IC designers and the EDA tool vendors who must provide the means for chasing down and correcting them.
At silicon process technologies of 250 nm and below, interconnect delays had begun to outweigh gate delays in their relevance to timing-closure calculations. But with 90-nm processes poised to enter the mainstream, interconnect delays are taking new and much more ominous forms. IR voltage drops, crosstalk and inductance, noise, and power integrity all conspire to make post-layout verification a nightmare.
The root cause of many of these issues, of course, is simply a matter of proximity. A number of years ago, when processes crossed the 1-µm barrier, simple RC delays in the interconnects were seen as a significant impediment to performance. Surely they were, but how much more of a factor are they at 90 nm? The answer is a lot (Fig. 1). IR drop issues, stemming from smaller, more resistive wires carrying greater currents (coupled with falling supply voltages), cause increasingly severe timing delays. Similarly, coupling capacitance is much more of a factor as we move down the process ladder. With the number of signal nets exploding, interference is a fact of life in nanometer design. Most of these issues can be traced directly to the density of circuits at 130 nm and below.
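To make the resistance argument concrete, here is a minimal back-of-the-envelope sketch. The wire cross-sections are assumed, illustrative values, not real process data, and it ignores barrier layers and surface scattering, which make scaled copper wires even worse in practice:

```python
# Illustrative sketch (assumed dimensions, not real process data): why
# wire resistance, and with it IR drop, climbs as cross-sections shrink.

RHO_CU = 1.7e-8  # ohm*m, bulk copper resistivity

def r_per_mm(width_nm, height_nm):
    """Resistance of 1 mm of rectangular wire, in ohms."""
    area_m2 = (width_nm * 1e-9) * (height_nm * 1e-9)
    return RHO_CU * 1e-3 / area_m2

r_250 = r_per_mm(320, 480)  # hypothetical 250-nm-era cross-section
r_90 = r_per_mm(120, 250)   # hypothetical 90-nm-era cross-section
print(f"250 nm: {r_250:.0f} ohm/mm; 90 nm: {r_90:.0f} ohm/mm "
      f"({r_90 / r_250:.1f}x)")
# The same 0.1-mA load drawn through 1 mm of the narrower wire droops:
print(f"IR drop at 90 nm: {0.1e-3 * r_90 * 1e3:.0f} mV")
```

Even with these rough numbers, a few-times jump in resistance per millimeter, multiplied by rising currents, turns a once-negligible droop into a meaningful fraction of a sub-1.5-V supply.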
Another factor to consider in weighing nanometer effects is falling supply voltages and how they combine with IR-drop issues to create a difficult design scenario. In the 250-nm era, supply voltages were in the 3-V range and transistors' threshold voltage (Vt) was around 0.25 to 0.33 V. Designers had plenty of supply-voltage headroom to work with. But in nanometer processes, gate oxides are perilously thin, leading to greater leakage currents and the potential for outright failure if supply voltages aren't held down. At the same time, IR drops are rising. The circuit that tolerated a 0.1-V supply-voltage variation at 3 V will fail if the same variation occurs at 1.2 V. As supply voltages fall, path delays increase much more dramatically in a 130-nm process than they do in a 250-nm process.
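The headroom squeeze can be sketched with the well-known alpha-power law delay model, in which gate delay scales roughly as Vdd/(Vdd − Vt)^α. The Vt and α values below are illustrative assumptions, not figures from any real process:

```python
# Hedged sketch: why a fixed 0.1-V supply droop hurts far more at 1.2 V
# than at 3 V, using the alpha-power law delay model
# (delay proportional to Vdd / (Vdd - Vt)**alpha).
# Vt and alpha here are illustrative, not real process parameters.

def rel_delay(vdd, vt, alpha=1.3):
    return vdd / (vdd - vt) ** alpha

def slowdown(vdd, vt, droop=0.1):
    """Fractional delay increase when the supply droops by `droop` volts."""
    return rel_delay(vdd - droop, vt) / rel_delay(vdd, vt) - 1.0

print(f"250-nm era (Vdd=3.0 V, Vt=0.3 V): {slowdown(3.0, 0.3):.1%} slower")
print(f"130-nm era (Vdd=1.2 V, Vt=0.3 V): {slowdown(1.2, 0.3):.1%} slower")
```

With these assumed numbers, the identical droop costs several times more delay at the lower supply, which is the article's point: the margin that was noise at 3 V becomes a timing failure at 1.2 V.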
Cross-coupling between signal nets is greatly magnified at nanometer geometries. At process geometries of 250 nm and higher, most wire capacitance results from coupling to ground and is highly predictable based on wire lengths. Global routing can predict these wire lengths based on placement, making timing predictions a fairly straightforward process.
It all changes with process shrinks, as the bulk of wire capacitance shifts from ground to neighboring wires. The wires, which were flatter and broader at 250 nm, become taller and narrower at 130 nm and below. Plus, they're closer together, presenting more sidewall area to other wires than to ground. This leads to a huge increase in wire-to-wire coupling capacitance. Crosstalk can induce substantial variation in signal delays (Fig. 2). At 180 nm, such crosstalk-induced delays pose a significant problem for high-performance designs. But at 90 nm, they cause a problem for all designs regardless of performance. With capacitance no longer predictable as a function of wire lengths, much more routing detail is required if designers hope to reach timing closure.
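The data dependence of crosstalk delay can be illustrated with the classic Miller-factor approximation: the coupling capacitance a victim net sees is roughly doubled when a neighboring aggressor switches the opposite way and roughly cancelled when it switches the same way. The capacitance values below are hypothetical, chosen only to reflect the coupling-dominated regime described above:

```python
# Illustrative sketch of crosstalk-dependent load (Miller approximation).
# For a victim net with ground capacitance Cg and coupling capacitance Cc
# to one aggressor: quiet aggressor -> Cg + Cc; opposite-direction
# switching -> Cg + 2*Cc; same-direction switching -> roughly Cg alone.

def effective_cap(cg_ff, cc_ff, aggressor):
    """Effective switched capacitance (fF) for one victim transition."""
    miller = {"same": 0.0, "quiet": 1.0, "opposite": 2.0}[aggressor]
    return cg_ff + miller * cc_ff

# Hypothetical nanometer wire: coupling dominates ground capacitance
cg, cc = 20.0, 60.0  # fF, assumed values
nominal = effective_cap(cg, cc, "quiet")
for agg in ("same", "quiet", "opposite"):
    c = effective_cap(cg, cc, agg)
    print(f"aggressor {agg:8s}: {c:5.0f} fF ({c / nominal:.2f}x nominal)")
```

Since delay scales with the capacitance being charged, a coupling-dominated net can run several times faster or slower depending on what its neighbors happen to be doing, which is exactly why timing can no longer be predicted from wire length alone.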
With all of these effects conspiring to make life difficult, it's surprising that many of today's EDA tool flows still rely on methodologies that were developed with older silicon processes in mind. Most timing analysis tools operate in the static domain and/or at the gate level. They can miss many nanometer-related signal-integrity problems due to delay calculations based on simplistic models of lumped capacitances. Reliance on inaccurate models that reflect electrical assumptions rather than physical reality makes for an error-strewn road to timing predictions. The alternative for many of today's methodologies involves guardbanding and over-design, with increased timing margins, over-constraint of synthesis and place and route, and/or conservative physical design. Either approach sacrifices performance and defeats the purpose of using advanced silicon processes.
More important than tools evolving to cope with nanometer effects is the evolution of methodologies. One can liken what's happening today to the shift that occurred some years ago, when designers were forced to abandon statistical wireload models for the greater accuracy of physical synthesis. That move, built on placement-based timing optimization, brought more physical information higher into the design flow. The alternative was to wait until placement and routing were completed before post-layout analysis of signal integrity could begin, which led to many iterations back through synthesis and place and route in search of timing closure.
Now another shift must occur, one in which placement-based optimization gives way to routing-based schemes. In nanometer geometries, designers simply can't know enough about the wire-to-wire interactions from placement alone. Silicon virtual prototyping holds a good deal of promise as a methodology shift that brings much more physical information to points prior to final routing. This minimizes the number of iterations back through synthesis and place and route (see "Suppress IC Development Costs With Prototyping," this issue, p. 84). For those issues that elude prototyping, new generations of post-layout analysis tools are zeroing in on IR-drop problems and simultaneous switching issues. These tools can help isolate true critical paths for transistor-level analysis where needed.
But in the nanometer age, understanding routing won't be enough to close timing. The nanometer processes themselves bring optical effects to bear that, in essence, lead to a "what you draw ain't what you get" scenario. EDA vendors have ambitious test-chip programs under way, fabricating chips on 65-nm processes to get a handle on the optical distortion factor, model that distortion accurately, and build into their tools the facility to compensate for it.
Methodologies must shift as new problems are uncovered in 90- and 65-nm processes. The well-understood issues can be handled through virtual prototyping. Problems that are less well understood must be handled through judicious margining, with care taken not to compromise performance. As these latter issues become better known and modeled, the margins can be tightened or eliminated. Ultimately, the goal is to avoid problems early in the cycle, rather than attempt to analyze them out later.
Understanding the problems is the key. If you never understood the problems while doing the design, you may not be able to fix them later. Hopefully, evolving methodologies will help designers avoid such conundrums by fostering correct-by-construction techniques that will also aid in understanding nanometer physical issues.