The Shifting Sands Of DSM Characterization

Nov. 5, 2001
With interconnect delays dominating, deep-submicron characterization is a highly complex challenge requiring simultaneous analyses of many parameters.

At one time, IC design was, if not easy, then at least somewhat simpler. But as the world's semiconductor foundries plunge deeper into the submicron realm, designers face exploding complexity.

Issues such as signal integrity and timing closure are made significantly more difficult by the tightening proximities of interconnects, and by dynamic effects like IR drop and simultaneous switching, all of which are increasingly interdependent at submicron geometries. Also, parasitic extraction has been complicated by a growing need to consider the inductive component. The result is an extremely complex set of parameters that must be analyzed in concert.

Successful design at 0.13 microns (µm) and below depends in large part on understanding the physical effects, and on a design methodology that accurately models active devices and interconnects. Not only must models be accurate, but the base assumptions made regarding physical effects must be consistent through a hierarchy of abstraction that incrementally adds physical detail.

Not too long ago, IC designers were able to assume that almost all delays in gate-level simulations occurred in the gates themselves. The interconnects, or wires, between the gates were almost entirely disregarded. Designs were approached as if the architecture required to meet timing, power, and area constraints could be determined by looking only at the gates themselves.

But as design rules fell below 1 µm, that scenario changed rapidly. The crossover point at which interconnect delay caught up with gate delay occurred at 0.25 µm (Fig. 1). At geometries smaller than that, interconnect begins to dominate as the leading source of circuit delay. That's because the aspect ratio of the wires has changed. "They're taller and have more sidewall capacitance," explained Rajit Chandra, PhD and vice president of technology at Magma Design Automation. As a result, the electrical activities in the neighboring wires affect the signal propagation times on any critical wire.
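The crossover can be illustrated with a toy scaling model (every constant below is assumed for illustration only, not process data): gate delay shrinks roughly with feature size, while the distributed RC delay of a fixed-length wire grows, because wire resistance rises faster than capacitance falls as the wires narrow.

```python
# Toy model of the gate/interconnect delay crossover. All coefficients are
# assumed illustrative values chosen so the curves meet near 0.25 um.

def gate_delay_ps(feature_um, k_gate=80.0):
    """Assumed model: gate delay scales down with feature size."""
    return k_gate * feature_um

def wire_delay_ps(feature_um, length_mm=1.0, k_wire=50.0):
    """Assumed model: distributed RC delay of a fixed-length wire.
    Resistance per mm rises as ~1/width; sidewall coupling keeps
    capacitance per mm roughly constant.  (ohm * pF = ps)"""
    r_per_mm = k_wire / feature_um   # ohm/mm, grows as wires thin
    c_per_mm = 0.2                   # pF/mm, held constant
    return 0.5 * r_per_mm * c_per_mm * length_mm ** 2  # distributed RC delay

for f in (1.0, 0.5, 0.25, 0.13):
    print(f"{f:5.2f} um: gate {gate_delay_ps(f):6.1f} ps, "
          f"1-mm wire {wire_delay_ps(f):6.1f} ps")
```

With these assumed coefficients the two delays meet at 0.25 µm and the wire dominates below it, the same qualitative behavior the article describes.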

"If you're starting from register transfer level (RTL) to create a circuit that will meet all of your constraints, and trying various combinations of multipliers, adders, and shifters while basing that on a poor interconnect model, it's a waste of time," says Tom Ferry, vice president of marketing for physical synthesis at Synopsys. Why?

The answer is that the fundamental paradigm of assumptions has changed. The statistical wireload models used at larger geometries, which assumed the entire delay was in the gate, are no longer valid. Instead, designers must delve much deeper into the physical realities of the circuits, accounting for many interdependent variables (see "Interconnect Along The Implementation Trail," p. 58).

The nature of the models themselves can be an issue. The old paradigm's statistical wireload models, which assume an average capacitance for each load, are now outmoded.

"When you didn't have to worry about wires, you could do a logical gate-level optimization, just in terms of fanout and levels of logic, and get pretty good results," says Noel Strader of corporate technical marketing at Avant! Corp. But using a statistical average for loads across an entire library of models will no longer work. Say the model assumes an average load of 5 pF for a gate fanout of three, and that you use that 5-pF figure in your calculations as you perform gate optimization. Some of the actual loads may be significantly higher or lower, depending on routing, and relying on the averages can set back efforts to achieve timing closure.
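Strader's point can be sketched numerically (the driver resistance and the post-route loads below are hypothetical): every fanout-3 net is charged the same 5-pF average during synthesis, but the routed loads spread widely, so per-net delay errors are large.

```python
# Sketch of why a single average load misleads. All values are assumed.

AVG_LOAD_PF = 5.0        # the library's statistical wireload for fanout 3
DRIVE_RES_KOHM = 1.0     # assumed driver resistance; kohm * pF = ns

actual_loads_pf = [2.1, 4.8, 5.2, 9.7, 3.3]   # hypothetical post-route loads

est_delay = DRIVE_RES_KOHM * AVG_LOAD_PF       # what synthesis assumed, ns
for c in actual_loads_pf:
    real_delay = DRIVE_RES_KOHM * c
    err_pct = 100.0 * (real_delay - est_delay) / real_delay
    print(f"load {c:4.1f} pF: real {real_delay:.1f} ns, "
          f"model off by {err_pct:+.0f}%")
```

The worst net here is underestimated by nearly 50%, which is exactly the kind of surprise that shows up as a timing-closure setback after routing.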

Estimates on where wireload models begin to break down vary somewhat. According to Steve Carlson, director of marketing at Get2Chip, statistical models don't work well for design blocks of more than 20 to 50 kgates. "For the typical design with 1 to 2 million gates and a 200-MHz clock, I would say that's the range in which you start to see problems. And certainly above that, you see problems," he says.

Overcompensating for poor models can produce a range of downstream problems. "When you overestimate or make your timing constraints more stringent than they really need to be on your critical paths, you're going to compensate by sizing up all the drivers," says Magma's Chandra. "Then noncritical nets will seem much weaker in their driving strengths than the critical ones," he explains.

The result is that the very root cause for signal-integrity problems is introduced. Critical nets will have very sharp rising and falling edge rates, which will interfere with the signals on noncritical nets and create noise problems. "This is a classic case in which results become nonconvergent when using statistical modeling methods," Chandra says.

So how should one approach deep-submicron characterization to avoid such pitfalls? As with most kinds of modeling, interconnect characterization at what some call ultra-deep submicron (UDSM) levels is carried out via methodologies that use various forms of successive approximation. Early in the process, interconnects are modeled at a relatively high abstraction level in the form of statistical models with relatively little physical information. But as the process moves down the chain from logical design to physical design, and finally toward implementation, there needs to be progressively less abstraction and, correspondingly, more real physical information taken into account.

"The key to actually achieving timing closure is having approximations that are consistent throughout the entire flow," says Callan Carpenter, president and CEO of Silicon Metrics. "It's not enough to just come up with an answer. You might obtain one answer during one part of the flow and a second answer during a second part of the flow, if the assumptions differ."

Silicon Metrics' approach to model consistency is predicated on the often overlooked fact that while interconnect delays may dominate overall, active devices still contribute to the delay of the circuit. Moreover, the interaction between interconnects and active devices must be considered. If you ignore this interaction by characterizing them independently from each other, then no matter how accurate your individual models are, you will skew your results.

The company's SiliconSmart TSO, or timing sign-off tool, lets users look at the characterizations of both active devices and interconnects in isolation first, and then in the circuit. "We're building an infrastructure that allows the simultaneous characterization of interconnects and active devices in the context of the design itself, not just a priori like a standard-cell library is characterized before you even build a chip," says John Croix, CTO at Silicon Metrics.

Further facilitating this approach is the company's SiliconSmart models. In traditional tool flows, each tool may have its own delay calculator, which can cause inconsistent design analysis and nonconvergent timing. The SiliconSmart open-model compiler (OMC) addresses this by creating models with centralized delay calculation contained within them. The models also account for various nonlinear effects such as IR drops and across-chip temperature variances that can skew modeling results. Eliminating "instance-specific" timing flaws can help reduce timing iterations and guardbanding (Fig. 2).
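The "delay calculator travels with the model" idea can be sketched as follows (the class and method names are hypothetical, not the actual OMC interface): if the model itself computes delay, every tool in the flow gets the same answer, instead of each tool's private calculator producing a slightly different one.

```python
# Conceptual sketch, not the SiliconSmart API: a cell model that carries
# its own delay calculation, so synthesis, placement, and sign-off can't
# diverge by reimplementing the math differently. Coefficients are assumed.

class CellModel:
    def __init__(self, name, intrinsic_ns, ns_per_pf):
        self.name = name
        self.intrinsic_ns = intrinsic_ns   # delay at zero load
        self.ns_per_pf = ns_per_pf         # load sensitivity

    def delay_ns(self, load_pf, temp_c=25.0):
        # One shared calculation, including a simple (assumed) linear
        # temperature derating of 0.2% per degree above 25 C.
        derate = 1.0 + 0.002 * (temp_c - 25.0)
        return (self.intrinsic_ns + self.ns_per_pf * load_pf) * derate

nand2 = CellModel("NAND2", intrinsic_ns=0.05, ns_per_pf=0.4)
print(f"{nand2.delay_ns(2.0):.3f} ns at 25 C")
print(f"{nand2.delay_ns(2.0, temp_c=125.0):.3f} ns at 125 C")
```

Because effects like temperature live inside the model, an instance placed in a hot region of the die gets a different, but consistently computed, delay.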

The importance of being sure that assumptions made at one level of abstraction carry through to others can't be overemphasized. One way to approach this is by using simpler models with higher-level tools, says Lou Scheffer, a Cadence Design Systems Fellow. "A synthesis tool, for example, can't easily handle mutual inductance for many reasons, both conceptual and practical," he says. "In general, the higher-level tool must think in terms of a budget for the lower-level modeling. The lower-level tool must ensure that this budget is met."

There's a place for all levels of modeling accuracy in the process, even the lumped-capacitance models that are weak in accuracy. "Early in the flow, it's fine to work with lumped-capacitance models," says Silicon Metrics' Croix. Using such models early on can speed up analysis and keep designers within the ballpark in terms of timing. As you move along, you continually bring more physical data into the modeling process, and as analysis results are signed off at each level of abstraction, they're passed as constraints to the next level of the design process. According to Croix, the difficulty comes when you have inconsistent models or inconsistent applications.
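The gap between a lumped-capacitance model and a more physical one shows up even in a uniform RC wire (the total resistance and capacitance below are assumed values): splitting the wire into more segments drives its Elmore delay from the crude R·C product down toward R·C/2.

```python
# Hedged sketch: why lumped capacitance is only a first approximation.
# R_TOTAL and C_TOTAL are assumed illustrative values, not extracted data.

R_TOTAL = 200.0    # ohms, total wire resistance
C_TOTAL = 0.5e-12  # farads, total wire capacitance

def elmore_delay(n_segments):
    """Elmore delay of a uniform wire split into n RC segments."""
    r_seg = R_TOTAL / n_segments
    c_seg = C_TOTAL / n_segments
    # Each segment's capacitor sees the resistance of all segments upstream.
    return sum(r_seg * i * c_seg for i in range(1, n_segments + 1))

lumped = R_TOTAL * C_TOTAL   # single-RC (lumped) approximation
for n in (1, 2, 10, 100):
    print(f"{n:3d} segments: {elmore_delay(n) * 1e12:6.2f} ps "
          f"(lumped: {lumped * 1e12:.1f} ps)")
```

The one-segment case is the lumped model; with 100 segments the delay settles near half the lumped value, illustrating why later flow stages must swap in distributed, extracted parasitics.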

The notion of successive approximation, or building accuracy in characterizations at UDSM geometries, is a key part of a number of tool flows from numerous vendors. For example, one view of the design cycle might show generic wireload models being deployed for initial logic synthesis with custom wireload models taking their place in floorplanning (Fig. 3). The custom models are built up through iterations of place and route, followed by parasitic extraction and static timing analysis.
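A custom wireload model of this kind might be built from trial-route data roughly as follows (the extracted values are hypothetical): group the extracted net capacitances by fanout and record a statistic per fanout bin, replacing the generic library table for the next synthesis pass.

```python
# Sketch of building a design-specific wireload model from (hypothetical)
# trial-route extraction data, as the custom-model flow described above.

from collections import defaultdict
from statistics import mean

# (fanout, extracted capacitance in pF) from an assumed trial route
extracted_nets = [(1, 0.8), (1, 1.1), (2, 1.9), (2, 2.6), (3, 3.1),
                  (3, 4.4), (3, 5.9), (4, 6.2)]

by_fanout = defaultdict(list)
for fanout, cap_pf in extracted_nets:
    by_fanout[fanout].append(cap_pf)

# Mean per bin is the simplest choice; a real flow might use a percentile
# to stay conservative on long nets.
custom_wlm = {f: mean(caps) for f, caps in sorted(by_fanout.items())}
for fanout, cap in custom_wlm.items():
    print(f"fanout {fanout}: {cap:.2f} pF")
```

Each place-and-route iteration refreshes the table, so the model converges toward this design's actual routing rather than an industry average.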

It's almost universally recognized that the former practice of speculating what will happen in the physical domain, while performing synthesis and trying to achieve timing closure, doesn't apply anymore. "You can't make guesses about coupling capacitance unless you have physical data, because you need to know how close the wires will be," says Magma's Chandra.

In its flow, Magma performs incremental extraction of resistances and capacitances during place and route, controlling the process so delays are adjusted along the way. "We have the system decide up front how it wants to apportion the delays, and then have it drive place and route, do the cell sizing, buffering, and wire sizing, so that you maintain that delay," says Bob Smith, vice president of marketing at Magma.

Likewise, Simplex Solutions' verification tools support insertion of physical data early in the flow. Vice president and cofounder of Simplex David Overhauser says some customers work toward very design-specific or even block-specific wireload models. "Some use our tools to build up more accurate wireload models for use in place and route or with other optimization tools," he notes.

Another approach to this solution uses what are known as virtual silicon prototyping tools. One such example is Silicon Perspective's First Encounter, which takes a full-chip approach to feeding physical information back into synthesis by exploiting a niche between synthesis and final routing. The tool performs floorplanning, placement, in-place optimization, timing closure, trial routing, and extraction, and includes its own timing engine.

"We're able to take the output of a synthesis tool and create this prototype," says Michel Courtoy, vice president of marketing. "We can build something that looks very much like the final design, although it's built much faster. It's close enough to the final design that people can make judgements and decisions about the design's quality."

Courtoy contends that without knowing the layout and length of the wires, and the results of parasitic extraction, it's impossible to assign the right timing to each block within a chip. "It's a chicken-and-egg problem," he says. First Encounter endeavors to address it by building a full-chip virtual prototype that's close to the final results. Users can partition the prototype into functional blocks and assign timing to each. Once blocks are partitioned with the right timing budgets and pin assignments, it becomes easier to choose a methodology for block-level implementation.

Silicon Perspective is looking to enhance its ties to final routing engines, and it recently entered into a partnership with Plato Design Systems. A relatively new startup in the routing arena, Plato is coupling its NanoRoute scalable routing engine with First Encounter to produce a flat capacity of 10 million gates (or 2.5 million cells) and a hierarchical capacity (for example, a 10-block SoC) of 100 million gates.

Beyond the need for an orderly approach to characterization that augments statistical models with physical information, there's an increasing need to expand the scope of parasitic extraction to include the inductive component. Carey Robertson, product marketing manager at the DSM division of Mentor Graphics, says the consensus is that inductance effects are important for circuits running at 500 MHz or faster. Some put the clock rate at which inductance really becomes an issue as high as 1 GHz. But it's a parameter that most agree should be on the radar screens of designers and tool vendors alike.

Inductance is often referred to as a 3D problem because it involves current loops, whereas capacitance is generally caused by neighboring interconnects. "An inductor might have a halo of influence an order of magnitude larger than a capacitor," Overhauser says. Therefore, extraction of inductance can be very tricky. Multiple layers of metal are a major component of the problem.

Compounding inductance effects is the move to semiconductor processes involving copper metallization. Copper has some unique characteristics with respect to inductance, most specifically lower resistance. "If you're using copper, you have to deal with inductance sooner than if you're not," says Synopsys' Tom Ferry.

According to Kevin Walsh, vice president of product management at Sequence Design, people will address inductance issues in a number of ways. "One is design methods and tools, and another is process changes," he says.

As Walsh points out, copper can't simply be laid down in a dielectric. Rather, it's placed in a titanium shell to prevent it from bleeding into the dielectric. This process change alters the characteristics of the copper from a modeling perspective. As a result, parasitic extraction has to be able to model those kinds of effects accurately, because it fundamentally changes the R, C, and Z characteristics of the device. This kind of analysis, Walsh notes, can't be done accurately until after place and route.

Overhauser goes so far as to say that with proper analysis, inductance can be effectively "designed out" as a problem. For a signal net, one can derive lengths of wires for which inductance may have an impact (Fig. 4). Very long wires are generally dominated by resistance, so inductance has little impact on their behavior, he says. Depending on the slew rate of the signals, inductance may or may not be an issue for shorter wires.
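A screen of the kind Overhauser describes can be sketched with one commonly cited rule of thumb: inductance matters for wires longer than the distance a signal edge spans, t_r / (2√(lc)), but shorter than the point where resistive attenuation takes over, (2/r)√(l/c). The per-millimeter parasitics and edge rate below are assumed values, not extracted data.

```python
# Hedged sketch of a length-window screen for inductance significance.
# All per-mm parasitics and the edge rate are assumed for illustration.

from math import sqrt

R_PER_MM = 20.0      # ohm/mm, assumed wire resistance
L_PER_MM = 0.5e-9    # H/mm, assumed wire inductance
C_PER_MM = 0.2e-12   # F/mm, assumed wire capacitance
T_RISE = 50e-12      # s, assumed signal rise time

# Below lower_mm the edge is slow relative to the wire's flight time;
# above upper_mm resistance damps any inductive behavior.
lower_mm = T_RISE / (2.0 * sqrt(L_PER_MM * C_PER_MM))
upper_mm = (2.0 / R_PER_MM) * sqrt(L_PER_MM / C_PER_MM)

def inductance_matters(length_mm):
    return lower_mm < length_mm < upper_mm

print(f"inductance window: {lower_mm:.2f} mm to {upper_mm:.2f} mm")
```

With these assumed numbers the window runs from 2.5 to 5 mm: very short and very long wires can skip inductance extraction entirely, which is how the problem gets "designed out" for most of the chip.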

Not many inductance extraction tools are yet on the market. But Avant! Corp. weighed in earlier this year with an inductance option for its Star-RCXT parasitic extraction engine. Star-RCXT does detailed modeling of every capacitive and inductive interaction in a design, which in an average 1 million-gate design results in more than a billion capacitors and inductors. To reduce this amount of data, some design flows take short cuts by ignoring smaller values. But the cumulative effect of this can cause timing errors of 10% to 20%. Star-SimXT and Star-RCXT use sophisticated third-generation network-reduction algorithms to obtain fast and accurate results that take all parasitic values into account.

Would-be users of inductance extraction must realize that the ability to do a full-chip inductance extraction is only as useful as what's consuming the extraction downstream. "When you extract inductance, you increase all data by an order of magnitude. The practicality of building a flow with inductance extraction today is very difficult," Overhauser says. If the downstream tools can't process that data, you won't have a flow. The EDA industry has some work to do in establishing a tool flow that accounts for inductance.

Yet another pressing problem for characterization at UDSM levels is signal integrity. Some question the use of static timing analysis tools to investigate dynamic issues like crosstalk and the effects of IR drops and simultaneous switching on critical paths. "The static analysis of what by its very nature is a dynamic phenomenon is fraught with problems," says Simon Young, product line manager at Nassda Corp.

Static models, Young maintains, are by their very nature abstract models that simplify, or entirely omit, a number of factors that determine a gate's admittance. These factors can include transition time of the input signal, load-circuit characteristics, and driver-transistor sizes and topology. According to Young, these are complex and nonlinear relationships misrepresented by static models. For example, an analysis that ignores the source-to-drain resistance of a transistor can be up to 15% in error (Fig. 5).

For static timing analysis, as with many other forms of analyses, there are tradeoffs to make between accuracy and speed. "For a truly dynamic environment with the highest possible degree of accuracy, you want to use Spice, but there are so many potential waveforms or settings you can use that it's impossible to perform all those analyses in our lifetime and sign off on a chip," says Silicon Metrics' Croix. So to accommodate dynamic effects in a static environment, it's best to use two tools—static and dynamic—in tandem.

The best tradeoffs of speed and accuracy imply a kind of filtering, in which designers apply the dynamic algorithms where they're needed to leverage accuracy, but then intelligently meld that data into static analysis to exploit its speed. "We have some interesting techniques for melding those things together into a flow," Croix says. Silicon Metrics is exploring the possibilities for a combination of methodology and software infrastructure that will allow engineers to take advantage of just enough dynamic analysis to set up highly efficient large-scale static analysis. The results come much closer to what a fully dynamic analysis would give, yet arrive far faster than running Spice on the whole design.
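The filtering idea reduces to a simple screen (the net names, slack figures, and thresholds below are all hypothetical): fast static analysis covers everything, and only nets that fail a slack or coupling test are handed to slow dynamic simulation.

```python
# Minimal sketch of filtering nets for dynamic (Spice-level) analysis.
# Net names, slacks, coupling ratios, and thresholds are assumed values.

nets = [
    # (name, static slack in ns, coupling cap as fraction of total cap)
    ("clk_root", 0.1, 0.45),
    ("bus_a[3]", 0.8, 0.50),
    ("reset_n",  2.5, 0.05),
    ("scan_out", 1.9, 0.10),
]

SLACK_LIMIT_NS = 1.0   # nets tighter than this deserve a closer look
COUPLING_LIMIT = 0.4   # heavily coupled nets are crosstalk suspects

needs_spice = [name for name, slack, coupling in nets
               if slack < SLACK_LIMIT_NS or coupling > COUPLING_LIMIT]
print("nets sent to dynamic simulation:", needs_spice)
# Everything else is signed off with fast static analysis alone.
```

Only two of the four nets survive the screen, which is the point: dynamic accuracy is spent where it matters, and static speed covers the rest.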

Mentor Graphics' Carey Robertson agrees that signal-integrity analysis will require more dynamic-analysis solutions, along with accurate parasitic models. Robertson cautions that analysis can only be as good as the data being fed into it. This implies accurate and detailed parasitic data that explicitly reports critical coupling information for dynamic analysis.

"As we move into smaller geometries, specifically 0.15 µm and below, the dominant contributor to delay is becoming the capacitance between the wires," says Bijan Kiani, vice president of marketing for Synopsys' Nanometer Analysis and Test. Synopsys' PrimeTime SI tool delivers both static timing analysis capability and static crosstalk analysis capability in an integrated solution that purports to address both problems.

Static analysis tools are inherently pessimistic, and dynamic tools are inherently optimistic, says Cadence's Scheffer. "Because pessimism is preferable to optimism—ask anyone who's ever had to recall a chip—static tools will prevail. That said, there's a lot of research into making static tools less pessimistic, and increasing the range of problems that they can attack, such as static prediction of peak supply currents without simulation," he says.

Many experts advise designers to carefully examine their SoC designs and attempt to find the truly critical nets. Then, it's suggested only those nets be subjected to thorough transistor-level dynamic simulation using Spice.

However, Rajiv Maheshwary, Synopsys' director of marketing for static timing products, warns that the onset of dynamic effects, like crosstalk, greatly complicates the process of determining just which nets in a design are "critical." "The world is changing with dynamic effects, and the problem will become less manageable for designers," he says.

Even if all modeling issues are ironed out, the problem of correct translation of modeling into actual silicon remains. According to Dale Pollek, vice president of marketing at Celestry Design Technologies, there's a growing need for process calibration, so that tools will provide analysis results that match processes and aren't simply theoretically accurate.

Plus, vendors like Numerical Technologies provide tools for analyzing optical effects that can make the world's most accurate modeling and analysis a moot point thanks to distortions that occur in silicon processes. According to Numerical's vice president of marketing, Michael Sanie, work is ongoing with leading parasitic extraction vendors to model layouts based not on idealized situations, but rather on simulated silicon images.

Need More Information?
Avant! Corp.
(510) 413-8000

Cadence Design Systems
(408) 943-1234

Celestry Design Technologies Inc.
(408) 451-1210

Get2Chip Inc.
(408) 501-9600

Magma Design Automation Inc.
(408) 864-2000

Mentor Graphics Corp.
(503) 685-7000

Nassda Corp.
(408) 562-9168

Numerical Technologies Inc.
(408) 919-1910

Plato Design Systems Inc.
(408) 436-8612

Sequence Design Inc.
(408) 961-2300

Silicon Metrics Corp.
(512) 651-1500

Silicon Perspective Corp.
(408) 327-0900

Simplex Solutions Inc.
(408) 617-6100

Synopsys Inc.
(877) 321-6063
About the Author

David Maliniak | MWRF Executive Editor

In his long career in the B2B electronics-industry media, David Maliniak has held editorial roles as both generalist and specialist. As Components Editor and, later, as Editor in Chief of EE Product News, David gained breadth of experience in covering the industry at large. In serving as EDA/Test and Measurement Technology Editor at Electronic Design, he developed deep insight into those complex areas of technology. Most recently, David worked in technical marketing communications at Teledyne LeCroy. David earned a B.A. in journalism at New York University.
