In Verification, Physics Intrudes Forcefully At 28 nm

Dec. 21, 2010
EDA tools that forge much tighter ties between place and route and physical verification will be required. On the functional side, far more attention will be paid to hardware/software co-verification than ever before.

With the ramp-up of 28-nm fabrication processes, system-on-a-chip (SoC) design teams are busily prepping chips that will cram more functionality into the same silicon real estate. But as with each process shrink that has come before it, the 28-nm node will put increased pressure on both functional and physical verification teams.

The greater gate density for a given silicon area is only part of the equation, and it mostly impacts functional verification. However, when it comes to physical verification at 28 nm, the number and complexity of the foundries’ design rules pose the most formidable challenge.

EDA vendors very closely track tapeouts across the industry at emerging process nodes. It’s their business to do so. According to one major vendor’s estimate, about 100 designs are in the pipeline at 28 nm, and roughly half of them have taped out.

With that measure of experience at the 28-nm node, the biggest trend that is taking shape in physical verification is the integration of rule checking with the placement and routing of the chip.

“I don’t remember us talking about physical verification nearly as much at 65 nm as we are now at 28 nm,” says Saleem Haider, senior director of marketing for physical design and design for manufacturability (DFM) at Synopsys.

Getting physical
The number of physical design rules has increased significantly since the 65-nm node. At 40 nm, foundry runsets totaled fewer than 1000 rules to be checked. At 28 nm, that number has exploded to anywhere from 1500 to 1800. As a result, according to some industry estimates, physical verification (Fig. 1) run times at 28 nm are almost four times longer than they were at 65 nm.

And while there are many more rules at 28 nm than there were at 65 nm, an even bigger issue is the increased complexity of the 28-nm rules. At 65 nm, design rules were predominantly two-dimensional. Manufacturing compliance in a given fabrication line was mostly a matter of ensuring that on-chip structures were far enough apart.

The same held true for routing lines. Keeping them a given distance from each other would be enough to keep crosstalk and other signal-integrity issues at bay.
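
To make the contrast concrete, here is a minimal sketch of what a 65-nm-era two-dimensional check amounts to: measure the edge-to-edge distance between shapes on a layer and flag any pair closer than the minimum spacing. The rectangle coordinates, the layer, and the 0.10-micron limit are illustrative values, not any foundry's actual rule.

```python
# Minimal sketch of a 65-nm-style two-dimensional spacing check.
# Rectangles are (x1, y1, x2, y2) in microns; the 0.10-um limit and
# the shapes below are illustrative, not an actual foundry rule.

from itertools import combinations

MIN_SPACING_UM = 0.10  # hypothetical metal-to-metal minimum spacing

def spacing(a, b):
    """Edge-to-edge distance between two axis-aligned rectangles;
    returns 0.0 if they touch or overlap."""
    dx = max(b[0] - a[2], a[0] - b[2], 0.0)
    dy = max(b[1] - a[3], a[1] - b[3], 0.0)
    return (dx * dx + dy * dy) ** 0.5

def check_min_spacing(shapes, min_spacing=MIN_SPACING_UM):
    """Return every pair of shapes that sits closer than the limit."""
    return [(a, b) for a, b in combinations(shapes, 2)
            if spacing(a, b) < min_spacing]

metal1 = [(0.0, 0.0, 0.5, 0.2), (0.58, 0.0, 1.0, 0.2)]  # 0.08-um gap
print(check_min_spacing(metal1))  # flags the pair as too close
```

At 28 nm, by contrast, a single spacing rule may also depend on wire width, length of parallel run, and the surrounding pattern, which is what drives the explosion in rule count and complexity.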

But at 28 nm, that’s no longer enough. The laws of physics loom larger than ever, and it’s now reaching the point where design teams can no longer assume that logical design and physical design/verification are separable problems.

Just as logic synthesis eventually had to become more physically aware back at the 90-nm and 65-nm nodes, now placement and routing must look ahead to the next phase of the implementation process and anticipate what takes place in physical verification.

At those larger process nodes, physical verification was a post-design issue. After placing and routing the chip, design teams would satisfy themselves that timing, power, and routability closure were achieved. Design-rule checking (DRC) and layout-versus-schematic (LVS) checks were the final steps before taping out the chip.

That methodology worked well enough at 65 nm. But at the smaller nodes, final signoff checking for physical issues will begin to turn up unpleasant surprises. The trend toward melding place and route with physical verification has already begun and will accelerate in 2011.

In the past, the routed design would be “tossed over the wall” to the physical verification team, resulting in a long iterative loop between the two groups. Going forward, the router and physical verification tools will be tightly integrated. This calls for a solid place-and-route platform with good manufacturing fidelity built in, as well as a strong physical verification tool that is foundry-endorsed and certified.

One example of this melding is the Synopsys IC Validator, which performs what the company calls “in-design” verification. “Engineers can do physical verification checks during the design process,” says Haider.

Physical problems can be caught then rather than waiting for the signoff checks just prior to tapeout. During design, it’s easier to implement a fix that maintains timing (Fig. 2) than to do so later. It’s highly likely that flows like this will become mainstream in the coming year for designs at these advanced nodes.
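
The sketch below illustrates the idea of in-design checking in the abstract: verify each routing edit against the rule as it is made, rather than saving everything for one full-chip pass at signoff. The data structures and the single spacing rule are hypothetical and are not the API of IC Validator or any other commercial tool.

```python
# Conceptual sketch of in-design checking versus end-of-flow signoff.
# Wire segments are 1-D intervals (start, end) on a routing track; the
# 0.10-um rule and the flow are hypothetical, not a real tool's API.

MIN_SPACING_UM = 0.10

def spacing_violations(segments, min_spacing=MIN_SPACING_UM):
    """Flag neighboring segments on a track that sit too close together."""
    segments = sorted(segments)
    return [(a, b) for a, b in zip(segments, segments[1:])
            if b[0] - a[1] < min_spacing]

def in_design_check(track, new_segment):
    """In-design flow: check each routing edit as it is made, while a
    fix that preserves timing is still cheap."""
    return spacing_violations(track + [new_segment])

def signoff_check(all_tracks):
    """Traditional flow: one full-chip pass just before tapeout, when
    every violation found risks reopening timing closure."""
    return [v for track in all_tracks for v in spacing_violations(track)]

track = [(0.0, 0.5), (0.7, 1.2)]
print(in_design_check(track, (0.55, 0.65)))  # caught at edit time, not at signoff
```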

Verification efficiency is paramount
To some, the 28-nm node doesn’t represent a revolutionary problem for physical verification, but rather one that is essentially bound up in the complexity of the checks. So even as they continue to merge physical verification with placement and routing, EDA vendors will emphasize the efficiency of their tools.

“We focus on throughput and the effective use of available compute resources,” says Jonathan White, senior director of product engineering at Magma Design Automation. “It’s crucial to have an architecture that is networked and aware of compute resources.”

More EDA tools will be able to track how much disk space and RAM is available and what kind of resource-management software is being used. Designers would rather not have to worry about these kinds of tedious chores.

Magma’s Quartz DRC and Quartz LVS are examples of where physical verification tools will be heading in the name of efficiency. With runtimes increasing so dramatically at 28 nm, verification teams are being forced to spend more on tool licenses to run multiple copies on multi-CPU machine farms.

Both Quartz DRC and Quartz LVS use a pipelined architecture so Linux compute farms and multicore CPUs can be harnessed to perform massively distributed processing (Fig. 3). Using this architecture, the physical verification problem is automatically split into smaller, independent tasks, making it easy to keep a farm of Linux machines busy. This fine-grained parallelism goes beyond the coarse parallelism of legacy tools and allows runtimes to scale nearly linearly as CPUs are added.
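
The general idea behind this kind of fine-grained parallelism can be sketched with nothing more than Python’s standard library: cut the die into independent windows and hand each one to a worker process. This is only a generic illustration of task splitting, not Magma’s actual pipelined architecture, and it ignores real-world details such as shapes that straddle tile boundaries.

```python
# Generic illustration of splitting a full-chip check into independent
# tile-sized tasks that a multicore CPU or compute farm can run in
# parallel. Not any vendor's architecture; just the general idea.

from multiprocessing import Pool

TILE = 100.0  # hypothetical tile size in microns

def tiles(die_width, die_height, tile=TILE):
    """Cut the die into tile-sized windows (ignoring shapes that cross
    tile borders, which a real tool would have to handle)."""
    x = 0.0
    while x < die_width:
        y = 0.0
        while y < die_height:
            yield (x, y, min(x + tile, die_width), min(y + tile, die_height))
            y += tile
        x += tile

def check_tile(window):
    """Stand-in for running the rule deck on one window of the layout."""
    x1, y1, x2, y2 = window
    return f"checked ({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f}): clean"

if __name__ == "__main__":
    with Pool() as pool:  # one worker per available CPU core
        results = pool.map(check_tile, tiles(450.0, 450.0))
    print(len(results), "independent tile checks completed")
```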

Another aspect of physical verification that is to be addressed in the coming year is the coding burden that comes with the foundries’ DRC runsets. Some verification teams will use the runsets off-the-shelf as provided by the foundries, but others prefer to modify them. That coding job gets passed down to the designers, because the foundry won’t do it.

Thus, there’s a need for an environment in which runset coding can be reused, and two efforts are afoot to accomplish it. The Taiwan Semiconductor Manufacturing Co. (TSMC) is spearheading one of these efforts in the form of an interoperable data format for DRC and LVS.

The foundry rolled this TCL-based concept out for its 40-nm process in 2009 and has seen buy-in from the major RTL-to-GDSII EDA providers. Look for the iDRC and iLVS data formats to be extended to more advanced process nodes as well.

The Open DFM language from the Silicon Integration Initiative (Si2) consortium embodies the second effort to standardize runset coding. A 1.0 specification of the language was released in November and is a free download at www.si2.org. A 1.1 spec is already in the works.

The language purports to provide a much more compact notation for DRC rules, reducing rule volume by anywhere from five to 20 times; an 1800-rule 28-nm runset, for example, could shrink to a few hundred statements. Texas Instruments is a believer in the Open DFM effort and has indicated its intention to adopt the language in production. The language should see quick adoption by EDA vendors and semiconductor manufacturers in 2011.

Functional verification trends
While the 28-nm node certainly will have an impact on physical verification, it will also make itself felt in the functional-verification realm. The larger designs that 28 nm makes possible create a need for more comprehensive functional verification and, at the same time, for greater capacity and performance.

As SoCs have morphed into multicore affairs, they have also steadily grown more software-centric. This has led to increased urgency to embark upon hardware/software co-verification earlier in the design cycle.

The trend toward the adoption of virtual prototyping is reinforced by the desire of SoC developers to start their efforts from a base platform from which they can quickly churn out derivative designs. Thus, 2011 will see a huge push for adoption of virtual prototyping technology and FPGA-based emulation/acceleration.

By running a virtual prototype of a hardware design on an FPGA-based platform, designers can greatly accelerate the software development effort, getting started on it earlier rather than waiting for silicon prototypes of the hardware. The virtual prototype is meant to accelerate the software design part of the process, not the traditional hardware part.

There’s another side benefit of virtual prototyping in what David Park, director of marketing for system-to-silicon verification solutions at Synopsys, calls “software-driven verification.” The thinking behind this is that most functional verification is accomplished using constrained-random and directed testing.

“The nice thing about the virtual platform is that you can use it to run the system-level testbench on the final product. This can be used as a parallel path to exercise the design in RTL,” Park says.

As a result, that system-level testbench becomes more of a real-world test environment for the hardware than an attempt to find every corner case. This can help achieve greater verification coverage and confidence that the design will work as expected when you get to physical silicon.

The notion of platform-based design leads naturally to the topic of reuse methodologies, the adoption of which is a trend that continues to pick up steam. Platform-based design, and the earlier start on software that it affords, has two huge benefits.

For one, it gives design teams more time to create differentiation in software for their end products. For another, it helps avoid catastrophic hardware/software integration issues that surface when you’re so close to market you can taste it, and when fixing them is at its most expensive.

Emphasis on emulation growing
Virtual prototyping and hardware/software co-verification are taking on a greater role in most verification methodologies. The pressure for faster time-to-market and first-pass design success now requires hardware acceleration of both the design and the testbench for more and more applications.

According to Mentor Graphics CEO Wally Rhines, some of Mentor’s advanced customers believe that emulation will account for a larger and larger portion of verification and gradually displace simulation, though it will never replace it entirely. Rhines believes the future of verification lies in a combined approach, with simulation used for some types of design exploration before teams quickly move on to a full emulation-based strategy.

“Simulation is running out of steam even before 28 nm,” says Lauro Rizzatti, vice president of marketing at EVE-USA. “This has to do with the node only indirectly, because with smaller geometries you get more gates in the same die. Simulation does no good in verifying operating systems, drivers, protocols. You need the acceleration hardware underneath to speed up the whole thing.”

Emulation is one convenient way to address the problem of verifying multi-CPU SoC designs. It’s also helpful for design exploration, such as partitioning functionality between hardware and software. How many cores should you design in? How large should the caches be for those processors?

These tasks are unwieldy at best for RTL simulation on its own. In addition, design teams want to boot an operating system on their prototype hardware as soon as possible. “The only way to do that in a reasonable amount of time is in an environment where acceleration plays a major role,” says Rizzatti.

Verification reuse is on the rise
Reuse methodologies become tangible in the standards arena. Solving large intellectual property (IP) integration issues requires a robust methodology that spans the industry, not just a single company. Implementation IP, verification IP, software drivers, operating systems, and other software elements all come to integrators from multiple sources, creating a need for a common base platform on which to integrate them.

This is where efforts such as IP-XACT and the Universal Verification Methodology come into play. IP-XACT began life under the Spirit Consortium, which has since been merged into Accellera. It has been transferred to the IEEE and was ratified as IEEE 1685. This standard describes an XML schema for metadata documenting IP. Also, it includes an application programming interface (API) to provide tool access to the metadata.
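
As a rough illustration of what that metadata looks like in practice, the sketch below reads a simplified, hand-written IP-XACT-style component description with Python’s standard library and assembles its vendor/library/name/version (VLNV) identifier. The fragment is not a complete, schema-valid IEEE 1685 document, and the component shown is hypothetical.

```python
# Minimal sketch of reading IP-XACT-style metadata with Python's
# standard library. The XML fragment is simplified and hand-written
# for illustration, not a schema-valid IEEE 1685 description.

import xml.etree.ElementTree as ET

SPIRIT = "http://www.spiritconsortium.org/XMLSchema/SPIRIT/1685-2009"

component_xml = f"""
<spirit:component xmlns:spirit="{SPIRIT}">
  <spirit:vendor>example.com</spirit:vendor>
  <spirit:library>peripherals</spirit:library>
  <spirit:name>uart_lite</spirit:name>
  <spirit:version>1.0</spirit:version>
</spirit:component>
"""

root = ET.fromstring(component_xml)
ns = {"spirit": SPIRIT}
vlnv = [root.findtext(f"spirit:{tag}", namespaces=ns)
        for tag in ("vendor", "library", "name", "version")]
print("VLNV identifier:", ":".join(vlnv))  # example.com:peripherals:uart_lite:1.0
```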

Freely downloadable at http://standards.ieee.org/getieee/1685/index.html, the standard is gaining stature as one that makes IP reuse and interoperability much easier and will continue its path toward broad industry adoption in 2011.

Still under the purview of Accellera is the Universal Verification Methodology (UVM), a standards effort that is aimed at fostering universal interoperability of verification IP. The UVM is an outgrowth of the Open Verification Methodology (OVM) from Cadence and Mentor Graphics and the Verification Methodology Manual (VMM) from Synopsys.

Now available for download in an early-adopter 1.0 version (www.accellera.org/activities/vip), the UVM will improve interoperability and reduce the cost of repurchasing and rewriting IP for each new project or EDA tool, as well as make it easier to reuse verification components. Overall, the VIP standardization effort will lower verification costs and improve design quality throughout the industry. The UVM is yet another standard that will see further development and adoption in coming months and years.

What’s the plan?
In addition to the growth in verification reuse, there is an upsurge in verification planning throughout the industry. Efforts such as the Common Power Format (CPF) have emerged to give design and verification teams a way to convey power intent throughout the design cycle.

“We talk a lot about unifying design intent. That does affect the verification side as well,” says Tom Anderson, group director of verification product management at Cadence Design Systems. “If someone has a power intent file in CPF, it’s important that we can use that file in the verification side.”

Verification intent is represented in an online verification plan. The notion of a verification plan isn’t new, but its definition is changing. At one time, verification plans were concerned only with coverage. Now they include goals for formal verification, goals for hardware verification with emulation and acceleration, and low-power intent. Watch for further growth in the scope of verification plans, including some movement into the analog space.

The verification plan also comes into play when one speaks of verification convergence, which is to say that you’re comfortable enough with your verification effort that you can move toward tapeout of the chip. A verification plan enables the team to collect metrics from the various verification engines and check their progress against the original plan.
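
A toy sketch of that bookkeeping appears below: goals, owning engines, and targets live in the plan, metrics flow in from the engines, and anything short of its target blocks convergence. The goal names, engines, and numbers are hypothetical; real planning tools collect these metrics from the verification engines automatically.

```python
# Illustrative sketch of checking collected metrics against a
# verification plan. Goal names, engines, and numbers are hypothetical.

plan = {  # goal -> (verification engine, target, achieved)
    "functional coverage":   ("simulation", 100.0, 93.5),
    "assertion proofs":      ("formal",      45,    45),
    "OS boot on prototype":  ("emulation",    1,     1),
    "low-power mode checks": ("simulation",  12,     9),
}

def convergence_report(plan):
    """List which plan goals are met and which still block tapeout."""
    open_items = []
    for goal, (engine, target, achieved) in plan.items():
        status = "met" if achieved >= target else "open"
        print(f"{goal:24s} [{engine:10s}] {achieved}/{target} -> {status}")
        if status == "open":
            open_items.append(goal)
    return open_items

remaining = convergence_report(plan)
print("Ready for tapeout" if not remaining else f"Open goals: {remaining}")
```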

About the Author

David Maliniak | MWRF Executive Editor

In his long career in the B2B electronics-industry media, David Maliniak has held editorial roles as both generalist and specialist. As Components Editor and, later, as Editor in Chief of EE Product News, David gained breadth of experience in covering the industry at large. In serving as EDA/Test and Measurement Technology Editor at Electronic Design, he developed deep insight into those complex areas of technology. Most recently, David worked in technical marketing communications at Teledyne LeCroy. David earned a B.A. in journalism at New York University.
