Are We Ready For Physical Verification Standards?

June 30, 2011
Everyone is busy, so we’d all like to find a way to do less work and achieve the same results, right? Theoretically, creating one physical verification (PV) syntax standard that all design rule checking (DRC) tools could read would save the foundries a tremendous amount of effort and time when developing DRC decks for each new node.

Such a standard would also provide fabless/fablite companies the flexibility to swap out any DRC tool and use design tools from different suppliers for custom layout or place and route (P&R). In this perfect world, you could do this without impacting your schedules or your design tapeouts, because the rule deck and checks for every DRC tool would be delivered at the same time, and they would all provide the same level of accuracy. Life would be good.

Over the years, attempts have been made to create a standard syntax that all DRC tools could read. More recently, TSMC, Mentor, and Synopsys co-created iDRC, and Si2 just announced the first official release of OpenDFM (based in part on the donation of the iDRC architecture). Let’s take a closer look at what it takes to gain adoption of a standard like OpenDFM.

The Reality

First, let’s consider some helpful background on how DRC decks and process development interact in a fab. At each new technology node, the foundry must develop and document the process. It also must identify and describe the design constraints that the new process requires.

This is done by iteratively creating test designs; running existing design rules against the known constraints; creating silicon; testing, identifying, and analyzing failures; updating the design constraints; creating new test designs using the new constraints and rule deck; and taking other steps until satisfactory production yields are obtained.

Since you want to perform the checks automatically, you need to be working with a DRC tool and an existing DRC deck as you develop the process. The output of this effort is a new DRC deck running on a specific DRC tool that defines what the foundry or fab will accept as “correct,” or signoff quality.

One might ask why you would use an iterative process with a single DRC tool/syntax. It's simple: the foundry or fab wants to get the most out of the new process node, and it can only get there through experimentation.

New process technology is a moving target. Each new node typically produces 20% to 30% more checks than the previous node, and many of them are not fully comprehended at the start. Trying to perform that experimentation with every available PV tool and syntax would be prohibitively time-consuming and most likely would result in delaying the release of the process.
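The original article gives no numbers beyond the 20% to 30% growth rate, but it is worth seeing how quickly that compounds across nodes. The sketch below is illustrative only: the starting count of 2,000 checks and the node names are made-up figures, and the 25% growth rate is simply the midpoint of the range cited above.

```python
# Illustrative only: if each new node adds ~25% more checks (midpoint of
# the 20-30% range cited above), deck size compounds quickly. The starting
# count of 2,000 checks and the node list are hypothetical.
checks = 2000
for node in ["65nm", "45nm", "32nm", "22nm"]:
    checks = int(checks * 1.25)  # ~25% more checks per node
    print(node, checks)
# Four nodes later, the deck has grown from 2,000 to 4,882 checks,
# well over double, before counting checks that get more complex.
```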

You aren’t a foundry, so why should you care? The DRC tool and associated DRC deck/syntax combination the foundry uses in process development sets the bar for acceptance of designs for manufacturing, or “signoff.” Using that same combination ensures you will get the same results as your foundry, especially early in the process ramp.

So if the foundry replaces the proprietary DRC syntax it currently uses for process development with OpenDFM (and assuming all EDA tools can read that syntax), we’re all set, right? No, and certainly not right away. Foundries and fabs will find it very difficult to change, and any transition will play out over a long period of time.

Why can’t they change faster? DRC decks consist of literally thousands of complex polygon processing operations developed and validated for a specific DRC tool. Each tool’s core polygon processing engine and operations are different, and some tools contain far more functionality than others. Most importantly, each vendor’s syntax is tied to its engine. This is the reason EDA vendors and the foundries only validate accurate results for that vendor’s tools running that vendor’s own syntax.
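The article contains no code, and real rule decks are written in each vendor’s proprietary syntax, but a rough sketch can show why even a “simple” rule is hard to standardize. The example below is not any vendor’s syntax or engine; it is a minimal, assumed Python model of a single metal-to-metal spacing check on axis-aligned rectangles. Production engines handle arbitrary polygons, hierarchy, and corner-to-corner cases, and it is exactly in those details that tools diverge.

```python
# A hedged sketch, not any vendor's DRC syntax: even one basic spacing
# rule implies nontrivial geometry processing inside the engine.
# Rectangles are (x1, y1, x2, y2); real engines handle arbitrary
# polygons, hierarchy, and corner cases, which is where tools differ.

def spacing_violations(shapes, min_space):
    """Report index pairs of rectangles closer than min_space apart."""
    def gap(a, b):
        dx = max(a[0] - b[2], b[0] - a[2], 0)  # horizontal separation
        dy = max(a[1] - b[3], b[1] - a[3], 0)  # vertical separation
        return (dx * dx + dy * dy) ** 0.5
    # The 0 < gap condition skips touching/overlapping shapes; how those
    # are treated is itself an engine-specific choice.
    return [(i, j) for i in range(len(shapes))
            for j in range(i + 1, len(shapes))
            if 0 < gap(shapes[i], shapes[j]) < min_space]

metal1 = [(0, 0, 10, 2), (10.5, 0, 20, 2), (0, 5, 20, 7)]
print(spacing_violations(metal1, min_space=1.0))  # prints [(0, 1)]
```

Multiply this by thousands of operations, each tuned to one engine’s polygon representation, and the cost of retargeting a deck to a different syntax becomes clear.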

Adoption Issues

A standard presupposes agreement by everyone who uses it. Using a standardized deck to evaluate new technology is a contradiction from the start, because the purpose of process development is to discover and characterize new and changing conditions. That is very difficult if all you’re using is an existing set of standardized checks. There’s a good reason why standards aren’t usually implemented until a new technology is mature.

For the sake of argument, assume a foundry completes process development and creates a corresponding new set of design constraints. For these to be part of a single industry syntax standard for this process, we must now add the industry standard body approval cycle to the process development schedule.

In addition, the standards review would have to incorporate the software development lifecycle of all EDA vendors supporting the new design constraints. This is a common approach to standards today, but can it be successful with elements that are so close to the actual creation of advanced offerings?

DRC tool performance is one more obstacle. You can try to force every vendor’s engine to execute the same syntax, but each will pay some penalty for doing so, in runtime, memory/disk usage, or both, at least in the early stages of adoption.

Ultimately, the objective of process development is to make a new technology node operational for customers as quickly as possible. Moreover, development starts years before the process is actually released to customers. For this reason, leading process developers are already working with 14 nm and beyond. As such, they are already using their preferred DRC tool/syntax solution for those nodes.

So Where Do We Go?

Standardization isn’t a simple problem to solve. The efforts by Si2 and OpenDFM are moving forward using proven techniques for standardization. But what if we consider other avenues? Why not have the foundries all agree on a common set of rules and constraints for every node? 

In this manner, designers could not only swap out different DRC tools in the design flow, but also have the freedom to quickly swap wafer suppliers as well! Okay, that’s probably farfetched, since all the technical issues stated above would still come into play, in addition to the obvious business issues.

Why not just use the same DRC tool/syntax the foundries use for process development in your custom design and P&R flows, across different design teams, as well as across foundries, IP providers, and other organizations?

The foundry need only provide one rule deck to guarantee that the customer will obtain the same verification results. If customers can run the foundry’s signoff DRC within their preferred design/layout flows, and use any custom design or P&R tool, they can implement new process nodes quickly and confidently.

Naturally, from my position, this approach makes perfect sense, and Calibre has many customers doing this today! However, just as my first suggestion, this approach creates obvious business issues for other members of the ecosystem.

The industry may eventually achieve standardization for physical verification design rules. But the reality is that we’re still quite a ways away from achieving that goal, and designs still need to go out the door today.
