Electronic Design

Neglecting IR Drop In Nanometer Designs Leads To Silicon Failure

In the move to higher clock rates and advanced process technologies, designers are finding increasing silicon failures in designs that appear to pass verification signoff. Today, many of these problems can be traced to complex electrical and physical effects that arise with process technologies at 130 nm and below. One of the most insidious sources of nanometer design failure lies in the familiar, and very basic, Ohm's Law.

During circuit operation, large nanometer designs can exhibit dynamic power-net voltage deviations due to IR drop that significantly erode the transistor supply voltage, VDD. Variations in VDD alter transistor performance characteristics, often causing unpredictable circuit response and overall design failure. Designers can turn to new methods for dynamic IR-drop analysis that help them account for this pervasive problem.

Circuits that perform as expected with older process technologies begin to fail when using nanometer technologies. Below 130 nm, designers lower supply voltages to manage power requirements and deal with the thinner oxides at these geometries. The combination of reduced supply voltages and on-chip supply variation due to IR drop or L(dI/dt) degrades available headroom in these circuits. Reduced VDD at the transistor contacts causes nonlinear performance degradation that becomes more pronounced at finer geometries.
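To make the headroom argument concrete, here is a hypothetical back-of-the-envelope sketch of the two supply-droop mechanisms named above, resistive IR drop and inductive L(dI/dt) drop. All numeric values are illustrative assumptions, not figures from the article; a real power grid requires full parasitic extraction and dynamic analysis.

```python
# Hypothetical back-of-the-envelope supply-droop estimate.
# All values are illustrative assumptions, not measured data.

i_avg = 0.5      # average current through a power-grid branch (A)
r_grid = 0.1     # effective grid resistance to the transistor contact (ohm)
ir_drop = i_avg * r_grid          # Ohm's Law: V = I * R

l_pkg = 0.2e-9   # package/bond-wire inductance (H)
d_i = 0.25       # switching current step (A)
d_t = 1e-9       # switching edge duration (s)
ldidt_drop = l_pkg * d_i / d_t    # inductive droop: V = L * (dI/dt)

vdd_nominal = 1.0                 # nominal supply for a 130-nm design (V)
vdd_at_contact = vdd_nominal - ir_drop - ldidt_drop

print(f"IR drop:       {ir_drop * 1e3:.0f} mV")
print(f"L(dI/dt) drop: {ldidt_drop * 1e3:.0f} mV")
print(f"Effective VDD: {vdd_at_contact:.2f} V")
```

With these assumed numbers, a tenth of the nominal supply never reaches the transistor contacts, which is exactly the kind of erosion that consumes the limited headroom of a 1-V design.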

For example, IR drop increases path delays by around 15% in a 0.25-µm circuit. But in a 1-V, 130-nm circuit, IR drop will increase path delays by about 55%. So levels of IR drop that can be ignored in older designs can exceed the remaining headroom in nanometer circuits. Thus, a 100-mV variation that wouldn't affect a 3.3-V circuit can induce catastrophic failures in a circuit designed for a 1- to 1.2-V supply.
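The arithmetic behind that comparison is simple division: the same absolute drop consumes a far larger fraction of a nanometer-scale supply. A minimal illustration:

```python
# The same 100-mV supply variation, as a fraction of two supply levels.
drop = 0.100  # 100 mV of IR drop (V)

fraction_33 = drop / 3.3   # legacy 3.3-V supply
fraction_10 = drop / 1.0   # 1-V nanometer supply

print(f"3.3-V supply: {fraction_33:.1%} of VDD lost")
print(f"1.0-V supply: {fraction_10:.1%} of VDD lost")
```

A drop worth about 3% of a 3.3-V supply becomes 10% of a 1-V supply, before any L(dI/dt) droop is added on top.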

As design rules shrink and circuits grow, designers find traditional verification methods less effective for identifying IR-drop effects. Traditional gate-level verification methods abstract away the transistor-level details necessary for dynamic IR-drop analysis. Further, older methods once used to estimate static IR drop in simpler circuits can't handle today's large designs, much less the massive parasitic data required to uncover subtle nanometer effects.

As a result, circuits too often pass traditional verification checks yet fail in silicon, forcing design teams to turn to costly diagnostic and repair methods such as electron-beam (EBEAM) and focused-ion-beam (FIB) studies. Aside from the inherent cost of these methods, the associated delays and mask costs for silicon re-spins can be debilitating in today's market environment. Anticipating these problems, many companies have used preventive methods like increased timing margins or physical guardbanding. Design teams also find themselves using additional metal layers or even flip-chip technology to mitigate IR-drop problems, but at significantly higher design and manufacturing costs. Applied in wholesale fashion, such global preventive measures lead to larger, costlier die and suboptimal performance in final silicon.

Consequently, designers are turning to emerging post-layout analytical methods that can predict IR-drop effects on large designs, such as memories and systems-on-a-chip. At the heart of these new approaches, advances in parasitic reduction and hierarchical back-annotation let designers manage massive data sets associated with design and parasitic data. While traditional methods can't account for global interconnect and interactions that reach across the full design, these newer techniques allow engineers to complete the full-chip verification analysis needed to reliably predict nanometer effects and determine dynamic IR drop in large nanometer designs.

The availability of improved methods for IR-drop prediction in complex designs is important not only because it helps design teams avoid silicon failures, but also because it lets them deliver optimized designs. Preventive measures like guardbanding or additional metal layers achieve reliable results only at the cost of compromises in performance, die size, cost, or ease of implementation.

Instead of smaller, faster, less-expensive chips, companies employing traditional methods are left with larger die, increased costs, and designs that can't exploit the potential advantages of nanometer technologies. Improved analytical approaches let designers focus preventive measures specifically on circuit elements that truly need them. With these methods, design teams can move to tape-out with increased confidence that actual silicon results will match predicted performance. Through more reliable prediction of dynamic IR drop, companies can eliminate costly silicon re-spins and achieve faster time-to-market of smaller, faster, more profitable semiconductor products that exploit the full potential of nanometer technologies.
