When a new semiconductor process node first comes online, the scenario is always the same: fundamental challenges posed by the complexity of the process threaten its widespread adoption. Yet somehow these challenges are met, the process node achieves mainstream acceptance, and the cycle repeats. Overcoming the adoption barriers usually involves a combination of process maturation and advances in design tools, such as better simulation models and more accurate crosstalk calculations.
This scenario is unfolding again today. But this time, an important new trend has emerged at 130 nm—model-based integration. Localized process improvements and point-tool advances are no longer sufficient to conquer adoption barriers. Instead, previously disjointed portions of the design and manufacturing process must be integrated through robust, accurate models to facilitate progress.
Consider the recent integration of synthesis and place-and-route. The demand for accurate timing closure meant that static wireload models were no longer good enough; delay estimates had to be derived from actual routing traces based on dynamic routing models. This need led to the integration of the synthesis and place-and-route functions, resulting in a dynamic, iterative environment that ensures timing closure.
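The gap between the two approaches can be sketched in a few lines. The numbers below are purely illustrative (not calibrated to any real process or library): a fanout-based wireload model returns the same delay for every net with a given fanout, while an Elmore-style estimate computed from the actual routed length diverges sharply as routes grow.

```python
# Toy comparison (all numbers hypothetical): a static, fanout-based wireload
# model vs. an Elmore-style delay computed from the actual routed length.

# Static wireload model: wire capacitance looked up from fanout alone,
# calibrated on "average" nets -- no knowledge of the real route.
WIRELOAD_CAP_FF = {1: 5.0, 2: 9.0, 3: 14.0, 4: 20.0}  # fanout -> cap (fF)

R_DRIVER_KOHM = 1.0          # assumed driver resistance (kOhm)
R_WIRE_OHM_PER_UM = 0.5      # assumed wire resistance per micron
C_WIRE_FF_PER_UM = 0.2       # assumed wire capacitance per micron

def static_delay_ps(fanout, load_ff):
    """Delay estimate from the fanout-based wireload model (no route info)."""
    c_total_ff = WIRELOAD_CAP_FF[fanout] + load_ff
    return R_DRIVER_KOHM * c_total_ff  # kOhm * fF -> ps

def routed_delay_ps(length_um, load_ff):
    """Elmore-style delay using the actual routed length (pi-model wire)."""
    r_wire_kohm = R_WIRE_OHM_PER_UM * length_um / 1000.0
    c_wire_ff = C_WIRE_FF_PER_UM * length_um
    return (R_DRIVER_KOHM * (c_wire_ff + load_ff)
            + r_wire_kohm * (c_wire_ff / 2.0 + load_ff))

# Three nets with identical fanout but very different routes: the static
# model reports one delay for all three, while the routed delays spread out.
for length_um in (50, 500, 2000):
    print(f"{length_um:4d} um  static={static_delay_ps(2, 10.0):6.1f} ps"
          f"  routed={routed_delay_ps(length_um, 10.0):6.1f} ps")
```

With these assumed constants, the static model reports the same delay for all three nets, while the routed estimate for the 2000-micron net is more than an order of magnitude larger. That spread is exactly the timing-closure error that forced synthesis and place-and-route together.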
This same integration imperative continues at 90 nm and below, yet the scope of integration is now wider. Localized integration in the design and manufacturing process isn’t enough. Up to 90 nm, static, rule-based information formed the interface between design and manufacturing; design rules, transistor parameters, and extraction parameters are all examples of such rule-based information.
Just as dynamic, model-based routing information was mandatory to ensure timing closure, designers now need dynamic, model-based information about the semiconductor process in the sub-100-nm era. Without this information, it will be impossible to unlock the true potential of advanced process nodes in terms of yield, performance, predictability, and reliability.
If you doubt the need for model-based integration, consider the exploding number of design rules at 90 nm and below. This proliferation of design rules is a by-product of the fact that a static tool, the design-rule checker, is trying to supply dynamic information, such as layout-dependent printability issues resulting from optical proximity effects. A design-rule checker isn’t the optimal tool for this problem; the challenge calls for a process-calibrated lithography simulator.

What can designers do to address these challenges of design-for-manufacturability (DFM)? A good start is to require accurate process models from fabs, and to insist that the resulting models be well integrated into the design flow. Designers should not accept static representations of a dynamic environment. If designers consider lithography effects and process variation early in the design process, they’re far less likely to be surprised by low yield, excessive leakage, or degraded reliability.
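Why can’t a rule checker capture printability? A one-dimensional sketch makes the point. The model below is a deliberate oversimplification with made-up numbers (a real lithography simulator models illumination, resist chemistry, and much more): it treats the aerial image as a Gaussian blur of the mask and assumes the resist prints wherever intensity exceeds a threshold. Two layouts that are identical to a width/spacing rule check then print different widths, purely because of their surroundings.

```python
# Toy 1-D "lithography" sketch (illustrative numbers only, NOT a real model):
# the aerial image is approximated as a Gaussian blur of the mask, and resist
# is assumed to print wherever blurred intensity exceeds a fixed threshold.
import math

SIGMA_NM = 60.0      # assumed optical blur (nm)
THRESHOLD = 0.5      # assumed resist print threshold

def phi(x):
    """Standard normal CDF (closed-form blur of a step edge)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def intensity(x, lines):
    """Blurred intensity at position x from (start, end) mask lines in nm."""
    return sum(phi((b - x) / SIGMA_NM) - phi((a - x) / SIGMA_NM)
               for a, b in lines)

def printed_width(lines, target, step=1.0):
    """Printed width (nm) around the target line's center.

    Walks outward from the center until intensity drops below THRESHOLD;
    assumes the center itself is above threshold.
    """
    a, b = target
    left = right = (a + b) / 2.0
    while intensity(left - step, lines) >= THRESHOLD:
        left -= step
    while intensity(right + step, lines) >= THRESHOLD:
        right += step
    return right - left

line = (200.0, 300.0)                          # 100 nm drawn line
isolated = [line]                              # no neighbors
dense = [(0.0, 100.0), line, (400.0, 500.0)]   # neighbors at 100 nm spacing

# Both layouts satisfy a static "min width 100 nm / min space 100 nm" rule,
# yet the printed widths differ because of optical proximity.
print("isolated prints:", printed_width(isolated, line), "nm")
print("dense prints:   ", printed_width(dense, line), "nm")
```

Under these assumptions the isolated line pinches well below its drawn width, while the same line in a dense context prints close to target: a crude caricature of iso-dense bias. A design-rule checker sees two identical 100-nm lines; only a simulation of the optical environment reveals the difference.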