Electronic Design
What Does Design Rule Signoff Really Mean—And When Should I Care?

Each time the semiconductor industry moves to a new technology node, everyone needs a new set of design rules to ensure IC manufacturability at the smaller dimensions and with the new processes and techniques involved. Now, obviously, we don’t start from scratch each time. If we did, we’d all still be carrying cell phones the size of a small suitcase, and there would be a lot more engineers jumping off bridges.

But how do you determine when the design rules are “complete” for each new node and process? How do you know if you’re getting accurate verification results that match the target foundry? And when should you care?

How Does A Foundry Define Signoff For New Processes?

When a foundry begins developing a new process, it quite logically starts with the rule decks that worked for the last process. Not only does this save time and resources, but those decks also have been proven accurate on hundreds, if not thousands, of production designs. Based on history, a foundry knows that most of the rules and conditions from the previous node will apply to the new node. The development work, then, can concentrate on the new issues.

Foundries are always under enormous pressure to release the next node as quickly as possible. The fastest way for them to do that is to focus on an initial toolset during the early stages of process development. They choose that tool based on two general factors.

First, they consider the breadth and depth of technology the tool offers. For obvious reasons, they want a tool that already handles all the known issues identified in previous nodes.

Second, they rely on their previous experience with the tool provider. They need to be able to work with the tool provider to analyze new issues and, if necessary, develop new technology to resolve them, so they consider which tool providers have a proven record of delivering innovative solutions.

Once they have selected the tool they will use for process development, they begin the activities that will ultimately determine what “signoff” means for that process node.

The foundry identifies and describes the design constraints required by the new process, then creates a regression test suite to verify that the physical verification decks, such as those for design rule checking (DRC) and double patterning, produce the expected results for each check, both at initial release and over time.

This is done iteratively: creating test designs, running the existing design rules against the known constraints, identifying and analyzing errors, updating the design constraints, creating new test designs from the updated constraints and rule deck, and so on, until the process owners deem the results acceptable for initial production. This development may also include the generation and analysis of silicon. The output of these efforts is a DRC deck, running on the specific DRC tool used during development, that defines what the foundry or fab will accept as signoff quality.
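To make the idea of a design rule check concrete, here is a minimal sketch of a single DRC check in Python. It is purely illustrative: real DRC decks express thousands of interacting rules in a tool-specific language, and the rule value and shape data below are invented for this example.

```python
# Toy DRC check: flag pairs of shapes on one layer that sit closer than a
# hypothetical minimum-spacing rule. Shapes are axis-aligned rectangles
# given as (x1, y1, x2, y2) tuples in layout grid units.

from itertools import combinations

MIN_SPACING = 3  # hypothetical spacing rule value, for illustration only

def spacing(a, b):
    """Edge-to-edge distance between two rectangles (0 if they touch or overlap)."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    dx = max(bx1 - ax2, ax1 - bx2, 0)
    dy = max(by1 - ay2, ay1 - by2, 0)
    return (dx * dx + dy * dy) ** 0.5

def check_min_spacing(shapes, rule=MIN_SPACING):
    """Return every pair of separate shapes closer than the rule, i.e. the violations.
    Touching/overlapping pairs are excluded; those would fall under a different check."""
    return [(a, b) for a, b in combinations(shapes, 2) if 0 < spacing(a, b) < rule]

layer = [(0, 0, 4, 2), (6, 0, 10, 2), (0, 5, 4, 7)]
violations = check_min_spacing(layer)  # only the first two shapes are too close
```

A regression test suite, in these terms, is a large library of such test layouts paired with the violations each one is expected to produce.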

Once the design rule coverage and margins are defined in terms of rule decks developed for that initial signoff tool, all other physical verification tools must be able to match those results to be certified for that process node.

Initially, everyone compares their results against the regression test, which is a fairly easy procedure. However, test structures can never completely capture every condition that will be seen when many companies with different design styles release their designs to the fab.
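Comparing a candidate tool's output against the reference regression results amounts to diffing two violation reports. A toy version, assuming each tool reports violations as hypothetical (rule name, location) pairs:

```python
# Toy comparison of two DRC result sets. The rule names and coordinates
# below are invented for illustration; real reports are far richer.

def compare_results(reference, candidate):
    """Diff two violation reports given as iterables of (rule, (x, y)) tuples."""
    ref, cand = set(reference), set(candidate)
    return {
        "missed": sorted(ref - cand),  # flagged by the reference tool only
        "extra": sorted(cand - ref),   # flagged by the candidate tool only
    }

reference = [("M1.S.1", (10, 4)), ("M2.W.2", (7, 7))]
candidate = [("M1.S.1", (10, 4)), ("V1.EN.1", (3, 3))]
diff = compare_results(reference, candidate)
# Certification requires driving both lists to empty across the whole suite.
```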

As a result, there will be ongoing changes to the design rule manual and coding changes for the signoff tool throughout the lifespan of a process node. These changes ripple through the rest of the ecosystem each time they occur. Moreover, the changes involve not just the design rule specifications, but also how the rules are implemented: subtle and intricate operational behaviors determined by the detailed rule coding, which differs for each physical verification platform.

For early adopters of the new process node, the delay resulting from this ripple effect is not acceptable, so they invariably use the same tool their foundry is using to develop and validate the evolving design rules.

When Should I Care About Signoff Quality During Design?

But if all the physical verification tools get to the same point (certification) eventually, why should it matter which one I use if I am not an early adopter? The answer to this lies in the growing complexity of the design rules themselves at each successive node.

Place and route (P&R) groups and custom/analog designers are accustomed to using the lightweight DRC tools supplied with most physical implementation tools. The design tool vendors built these checkers to ensure that most of the design rules are satisfied as the design is being created, leaving only a few violations to fix when they are discovered by the signoff verification tool.

That’s how it worked in the past. But at advanced process nodes, both the complexity of new rules and the interactions between them mean that the built-in “lite” rule checkers leave more and more violations to be fixed at signoff.

At some point, the iterations between layout and signoff become untenable in terms of complexity and project schedule slip. To address this challenge, designers need to use a signoff-quality DRC engine that can provide full coverage of the design rules from the earliest stages of the design flow to ensure their designs are as close to “correct by construction” as possible.

Supply-Chain Complexities

This process development model has been in place essentially forever. So why bring it up now? In 2011, the typical top-30 fabless/fab-lite company is a complex animal. It obtains intellectual property (IP) for its designs from both internal and external suppliers, does both custom and automated (P&R) design implementation, has design teams spread across the world, and tapes out designs to multiple foundries. Across such a complex ecosystem, minimizing the aggregate time to production while still ensuring high-quality products can be critical.

If, for example, the system-on-a-chip (SoC) team brings in lower-level IP blocks from an external supplier that were validated with one physical verification tool, but the P&R team is using a different physical verification tool, there could be discrepancies that will require additional resolution work and potentially delay tapeout.

These variations can also become the source of protracted discussions between the foundry, the IP vendors, and their customers. What is a true design rule violation, and what is merely a minor, but acceptable, disparity in results? By choosing to use the same DRC engine throughout their supply chain, companies can minimize schedule delays and avoid increased workloads caused by inconsistent physical verification results.

Conclusion

The physical verification process has grown steadily more complex and difficult as scaling continues and manufacturing variation becomes more significant, requiring ever more sophisticated design rules and checks. In effect, physical verification is the link between the abstracted world of logical design and the real world of manufacturing.

My advice? Invest in the most comprehensive verification tools available, standardize your solution across design teams and IP suppliers, and apply signoff-quality verification at all stages of your design creation process.

Driving convergence to signoff manufacturing requirements throughout your development process will pay handsome dividends in more predictable time-to-market, regardless of process maturity, design styles, or ecosystem structure.
