Electronic Design

Taking DFM To The Heart Of The Design Flow

Design-for-manufacturability (DFM) enhancers that operate on post-layout GDSII files are no longer viable for 65-nm designs. The way to reach acceptable performance and yield goals is to make the entire design flow aware of DFM. By building DFM awareness into cell characterization, implementation, analysis and optimization, and sign-off, manufacturability issues are understood and addressed at the most appropriate and efficient steps within the design flow.

At 65 nm, DFM is even more important because the critical dimensions of on-chip structures have shrunk. A given set of absolute physical variations can result in relatively large electrical variations. Although yield loss due to random particles remains important, the systematic manufacturability issues related to lithography and chemical-mechanical polishing (CMP) can dominate. In addition, the design must be made more robust and less sensitive to process variations to ensure an acceptable parametric yield across all manufacturing conditions.
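To put the scaling effect in perspective, a quick back-of-the-envelope calculation (using an assumed, purely illustrative 3-nm critical-dimension variation, not foundry data) shows how the same absolute variation becomes a larger fraction of the drawn dimension at each successive node:

```python
# Illustrative only: a fixed 3-nm CD variation is a larger fraction
# of the drawn dimension as geometries shrink (numbers are hypothetical).
def relative_variation(drawn_nm, variation_nm=3.0):
    """Return the variation as a percentage of the drawn dimension."""
    return 100.0 * variation_nm / drawn_nm

for node in (130, 90, 65):
    print(f"{node} nm: {relative_variation(node):.1f}% of drawn width")
# 130 nm: 2.3%   90 nm: 3.3%   65 nm: 4.6%
```

The same 3 nm of physical variation is twice as significant, in relative terms, at 65 nm as it was at 130 nm, which is why the electrical consequences grow even when the process itself does not get worse.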

At 65 nm, every structure in the design is affected by adjacent structures. If two geometric shapes are created in the GDSII file and photomask in isolation, these shapes will print in a certain way. But if the same shapes are juxtaposed, interference effects between them modify each shape, often in non-intuitive ways. The result can negatively impact timing, noise, power consumption, and, ultimately, yield.

To be truly DFM-aware, the design environment must address each of these problem areas by modeling the systematic and statistical effects rather than relying on a strictly rules-based approach. Even if the design tools follow all of the rules provided by the foundry, the chips can still show parametric (or even catastrophic) problems.

Thus, tools need some understanding or modeling of the lithography process, or they can't be accurate enough to ensure correct silicon behavior under all likely manufacturing conditions. A model-based, DFM-aware design flow implies more accurate analysis, which can be exploited during optimization for more speed, less leakage, or smaller die sizes.

The DFM-aware design flow starts with models that have been specifically characterized to support DFM-aware implementation. The characterization tools must turn a foundry's process data and models into efficient abstract models that account for process variations and lithographic effects. These abstract models can then be used within the implementation tool, knowing that both the systematic (deterministic) and the statistical (random) contributions to timing, power, and yield are accurately represented.

For example, by knowing the delay or leakage sensitivity of each standard cell, the implementation tool can optimize critical timing paths by avoiding such cells or by altering their placement to minimize such sensitivity.
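As a sketch of that idea, the snippet below ranks hypothetical drive-strength variants of a cell (the names and delay numbers are invented for illustration, not real library data) and, among those that meet a delay budget, picks the one with the smallest delay sensitivity:

```python
# Hypothetical cell variants: (name, nominal_delay_ps, delay_sigma_ps).
# The sigma column stands in for the cell's characterized sensitivity
# to process variation; all values are invented for illustration.
cells = [
    ("INVX1", 42.0, 6.0),
    ("INVX2", 35.0, 4.5),
    ("INVX4", 30.0, 2.5),
]

def pick_cell(budget_ps):
    """Among variants meeting the delay budget, choose the least
    variation-sensitive one; return None if none fits."""
    feasible = [c for c in cells if c[1] <= budget_ps]
    return min(feasible, key=lambda c: c[2]) if feasible else None

print(pick_cell(36.0))  # -> ('INVX4', 30.0, 2.5)
```

A real implementation tool folds this trade-off into placement and sizing across the whole netlist, but the selection criterion is the same: on critical paths, prefer the cells whose timing is least sensitive to variation.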

The routing engine needs to be lithography-aware. Only then can it identify patterns that must be avoided, as well as locations at which the layout must be modified to bypass hotspots that downstream resolution-enhancement techniques (RET) cannot fix. Combining lithographic-aware placement and routing helps minimize the need for post-layout RET and increases the effectiveness of any RET that's required.

Static-timing-analysis (STA) engines assume worst-case delays for the different paths. STA assumes, for instance, that all of the delays forming a particular path are simultaneously at their minimum or maximum values, which is both unrealistic and pessimistic. In contrast, a DFM-aware design environment would gain accuracy through the use of statistical STA, or SSTA.
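The pessimism is easy to quantify. The sketch below (assuming independent, normally distributed stage delays, a simplification that real SSTA engines refine with correlation models) compares a corner-based worst-case sum against a statistical sum, in which the standard deviations combine in root-sum-square fashion rather than adding linearly:

```python
import math

# Hypothetical three-stage path: (mean_ps, sigma_ps) per stage.
stages = [(50.0, 5.0), (40.0, 4.0), (60.0, 6.0)]

# Corner-based STA: every stage simultaneously at mean + 3 sigma.
worst_case = sum(mu + 3.0 * sigma for mu, sigma in stages)

# Statistical sum: means add, variances add (independence assumed).
mean = sum(mu for mu, _ in stages)
sigma = math.sqrt(sum(s * s for _, s in stages))
statistical = mean + 3.0 * sigma

print(f"worst case : {worst_case:.1f} ps")   # 195.0 ps
print(f"statistical: {statistical:.1f} ps")  # 176.3 ps
```

The statistical 3-sigma delay is markedly smaller than the stacked worst case, and that recovered margin is exactly what a DFM-aware optimizer can spend on speed, leakage, or area.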

But DFM-aware analysis is of limited use without DFM-aware optimization. For timing optimization that's aware of the variability, for example, the SSTA engine must account for sensitivity and criticality. Consider an example of two timing PDF (probability distribution function) curves (see the figure). Which of these is the more critical?

In traditional STA, the more critical path is the one that affects the circuit delay the most; it's the one with the most negative slack. In DFM-aware SSTA, the most critical path is the one with the highest probability of having the greater effect on the circuit delay. Therefore, the SSTA optimizations must be based on the paths most likely to cause problems.
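One way to make that criterion concrete is to model each path's delay as a normal PDF and compute the probability that it exceeds the clock target. In the hypothetical example below, path B has the lower mean delay, so traditional STA would rank it as less critical, yet its wider distribution gives it a far higher probability of violating the target:

```python
import math

def p_exceeds(mean, sigma, target):
    """P(delay > target) for a normally distributed path delay.
    Illustrative model; real SSTA PDFs need not be normal."""
    z = (target - mean) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Two hypothetical paths against a 100-ps timing target:
a = p_exceeds(95.0, 2.0, 100.0)  # narrow PDF, mean closer to target
b = p_exceeds(90.0, 8.0, 100.0)  # wide PDF, lower mean
print(f"path A: {a:.3f}  path B: {b:.3f}")  # path A: 0.006  path B: 0.106
```

By this measure path B is the one most likely to cause a failure, which is why SSTA-driven optimization ranks paths by their probability of impact rather than by nominal slack alone.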

In addition to timing analysis and optimization, all other analysis and optimization engines (leakage power, noise, and yield) must also use statistical techniques that account for process variability. This makes the design more robust and less sensitive to variations, maximizing the yield throughout the life of the device.

The tool chain must provide DFM-aware sign-off verification. The verification engines have to analyze and verify the design for process variations and lithographic effects for timing, power, noise, and yield. But because many manufacturability issues are difficult to encode as hard-and-fast rules, the physical verification environment needs to accommodate model-based solutions.
