DSM Design Drives The Need For EDA Tool Accuracy

Jan. 26, 1998
Changing Design Environments Require Tighter Relationships Between EDA Tool Vendors, ASIC Vendors, And Tool Users.

In the ASIC industry, there's much discussion about design complexity, validation cycle time, tools, and overall design methodology. There are concerns that today's electronic design automation (EDA) tools cannot keep pace with the requirements placed upon them by growing design complexity. This gap is expected to widen further with each advancement in semiconductor technology that reduces feature size and allows for larger, faster, and more complex designs. Intellectual Property (IP), the Virtual Socket Interface (VSI), large megacells, cores, and a myriad of rapid design methodologies will only continue to put more pressure on these design tools.

Product life cycles and time-to-market pressures are pushing users to complete designs quickly. Design verification must be accurate, since errors cause delays and increase costs, and can even make the design miss its market window entirely. What's needed is a three-way partnership between users, ASIC vendors, and EDA tool vendors. But the question is whether the EDA community is in sync with the leading technologies of today.

With system-on-a-chip and ASIC designs approaching 1 million gates, it is clear that the industry is not at rest. IP, VSI, and reusable logic will allow users to quickly put more complex functions together. But whether there are sufficient tools and methodologies in place to ensure successful designs and design verification is questionable. How well the industry provides these solutions will determine the overall success and growth potential of not only EDA tool vendors, but also ASIC vendors and tool users.

Design Styles Design styles directly influence design methodology and tools. Design styles for digital circuits can be fully synchronous, asynchronous, or a mix of both. In this article, fully synchronous is defined as requiring that all direct-action signals, such as clocks, sets, and resets, originate at the pads.

Some consider fully synchronous to include only a single master clock, but even in designs with a single master clock, glitches can occur in decoded set or reset signals feeding sequential gates due to timing differences. This can result in potential circuit malfunctions that could cause manufacturing yield losses or even nonfunctional silicon. These types of designs are considered semi-synchronous.

Asynchronous designs usually have multiple clocks running various sections of logic, or signals that arrive at random intervals, as is common in communication circuits. Most companies can dictate design styles, especially if the types of circuits they develop lend themselves to a particular style.

It has long been held that analog design is difficult, while developing a digital design simply means defining logic functions and making sure timing requirements are met. This is no longer the case, since digital designs are beginning to take on more analog characteristics. It is becoming more important to have a closer link between Spice and the validation processes, especially with designs that are not fully synchronous. As long as designers develop asynchronous circuits and design complexity continues to rise, verification accuracy will remain a major industry focus.

Power consumption also is becoming important, especially in battery-operated applications. Power conservation is accomplished by lowering the supply voltage and by disabling portions of the logic during operation. Both methods can cause side effects and require accurate timing verification. Shrinking process technologies also reduce noise immunity, since narrow glitches can be detected and passed on by other logic functions. Wire interconnect, once considered an insignificant part of the overall delay, is now significant. While this new technology has enabled designers to build larger, faster, and more complex designs, it also has compounded the problems of design verification and physical design.

As designs continue to push performance, grow in complexity, and move to deep submicron technologies, several technical issues continue to arise. The capabilities of all EDA tools are being pushed to the limit, making design verification challenging. With the standardization of the two IEEE HDL languages (Verilog and VHDL), it is crucial to address the growing concerns about accuracy and capability. Other timing standards, such as the Standard Delay Format (SDF) and the proposed Delay Calculator Language (DCL), also will assist in meeting the challenges of design verification. With added focus on these standards, EDA vendors can put their emphasis on value-added enhancements rather than language semantics or simulator behavior. To get a deeper appreciation of these accuracy concerns, a more in-depth analysis of such issues as pulse filtering, signal skew, interconnect, delay selection, accurate delay modeling, and memory modeling is needed.

Pulse Filtering Most simulators, including those compliant with VITAL and Verilog, allow for two different pulse-filtering modes: inertial and transport. The inertial mode, better suited to older technologies, filters out all input transitions narrower than the gate's propagation delay. As geometries continue to shrink, the transport delay mode has become more important. In transport mode, all pulses, no matter how short, are allowed to propagate to the device output, and many simulators allow the propagated pulse to carry an "X" state to indicate ambiguity in the signal value. This is important because, during this region of uncertainty, the actual amplitude and duration of the signal are unclear and, depending on the technology, can cause devices to fail. These problems can result in either nonfunctional silicon or yield loss when such glitches drive direct-action signals, such as clocks, resets, and sets, on sequential elements.
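
To make the two modes concrete, here is a minimal sketch of the difference. The buffer, its 4-unit delay, and the input waveform are invented for illustration, and the edge pairing is simplified; this is not any particular simulator's algorithm.

```python
# Minimal sketch of inertial vs. transport pulse filtering through a
# buffer. The 4-unit gate delay and the input waveform are illustrative.

def filter_pulses(edges, delay, mode):
    """Propagate input edges, given as (time, value) pairs, to the output.

    'inertial'  -- drop any pulse narrower than the gate delay.
    'transport' -- pass every pulse, however narrow (real simulators may
                   mark such pulses with an 'X' state to flag ambiguity).
    Assumes edges alternate, starting with a rising edge.
    """
    out = [(t + delay, v) for t, v in edges]
    if mode == "transport":
        return out
    filtered, i = [], 0
    while i < len(out):
        # A pulse is a pair of consecutive edges; swallow it if too narrow.
        if i + 1 < len(out) and out[i + 1][0] - out[i][0] < delay:
            i += 2
        else:
            filtered.append(out[i])
            i += 1
    return filtered

# A 1-unit glitch at t=10 followed by a wide pulse at t=20.
edges = [(10, 1), (11, 0), (20, 1), (30, 0)]
print(filter_pulses(edges, 4, "inertial"))   # [(24, 1), (34, 0)] -- glitch gone
print(filter_pulses(edges, 4, "transport"))  # glitch survives at t=14..15
```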

Any time that another event is scheduled on a device output before an already scheduled event has a chance to mature, the second event is considered to be preemptive. When the second event is scheduled after the first event, it is a positive preemptive event; when it is scheduled prior to the first event, it is a negative preemptive event.

In the first example, the rising event on the buffer at time 1 schedules a rising event on the output at time 5 (1 + 4) (Fig. 1). At time 2, the narrow pulse closes, thus scheduling the output to return to its current state at time 8 (2 + 6). This is a positive preemptive event, since the second event is scheduled to occur after the first. If this signal were driving a clock input of a sequential device and the amplitude and duration met the criteria for minimum clock pulse width, there would be a potential for clocking in new unwanted data into the device. Even Spice simulations using a 0.8-µm process have indicated that narrow glitches around 80% of the required signal width can cause problems in sequential devices. The problem only gets worse with deep submicron technologies.

In the second example, the falling edge of the input signal at time 14 schedules the output to change at time 20 (14+6) (Fig. 1, again). The rising edge at time 15 schedules the output to return back to its original high value at time 19 (15+4). Since the second event is scheduled prior to the first unmatured event, the new event is negative preemptive.
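
Both cases can be reproduced with a small event-scheduling sketch. The rise and fall delays (4 and 6 time units) match the Fig. 1 example, but the classification code itself is illustrative, not any vendor's algorithm.

```python
# Sketch of preemptive-event classification for a buffer with a 4-unit
# rise delay and a 6-unit fall delay, matching the Fig. 1 numbers.

RISE_DELAY, FALL_DELAY = 4, 6

def classify(input_edges):
    """Schedule buffer output events and label each preemption."""
    pending = None                       # (time, value) not yet matured
    for t_in, value in input_edges:
        t_out = t_in + (RISE_DELAY if value == 1 else FALL_DELAY)
        if pending is not None and t_in < pending[0]:
            # An earlier output event is still unmatured: preemption.
            kind = "positive" if t_out > pending[0] else "negative"
            print(f"{kind} preemptive: new event at t={t_out}, "
                  f"unmatured event at t={pending[0]}")
        pending = (t_out, value)

classify([(1, 1), (2, 0)])    # output events at 5, then 8  -> positive
classify([(14, 0), (15, 1)])  # output events at 20, then 19 -> negative
```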

Unlike the positive preemptive pulse, which VITAL and Verilog simulators catch today, negative preemptive pulses have been ignored. As shown by the Spice waveform, both events are real and are analog, not digital, in nature. The device delays given to logic simulators are characterized so that, by the time the delay matures, the output has reached a level guaranteed to affect the inputs of the gates it drives. In reality, the output begins affecting those gates much sooner, so all pulses must be considered for accurate simulation results.

This example is very simple, and less likely to occur than the situation that arises on multiple-input gates, such as NANDs, ANDs, ORs, and NORs, used to create decoded signals. In these cases, nearly simultaneously switching inputs can cause the output to swing in one direction, only to have another input's transition cause it to swing back to its original logic level. In fully synchronous circuits, this condition is not a significant problem, but in circuits that contain asynchronous parts, it is.

Due to the efforts of the Verilog standards committees (OVI and IEEE 1364), the pulse problem has been corrected in Verilog. Some EDA companies released production versions of their simulators with these changes in early 1997. Several other EDA companies are in the midst of implementing the changes to bring their simulators into compliance. The VITAL standards committee is also finalizing the same changes for the 1998 reballot of the standard, to ensure VITAL users have the same level of accuracy. As long as designers continue using design styles other than fully synchronous ones, it's imperative that standards committees address and implement language changes that provide the required accuracy.

Negative Timing Constraints Sequential devices have not only propagation delays, but also timing relationships between their primary inputs that must be met for stable operation. In some cases, these relationships can become negative, as in the case of hold times. The hold time is the interval, measured from the active transition of the clock, during which the data input must remain stable. Depending on how the device was built and modeled, this can be a critical relationship to maintain. Since digital simulators by nature do not maintain negative time, it has been common practice to move all negative values to zero. Device performance and accuracy requirements now demand that simulators maintain the negative relationship and make the appropriate adjustments to all the remaining timing relationships; a small sketch of this adjustment follows the next paragraph.

Modeling And Skew In older technologies, assumptions such as sharp signal edges, simple linear equations, and lumped wire interconnects were sufficient. With signals looking more like analog sinusoidal waveforms, and with the increasing role interconnect plays in the overall delay, complex modeling equations for the delays have become important.
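
Returning to negative constraints: one way a simulator can maintain a negative hold time, rather than clamping it to zero, is to shift an internal delay onto the data pin so that both checks become non-negative while the total setup-plus-hold window is preserved. The sketch below illustrates that adjustment under that assumption; the flop's values are invented, and real model libraries must reconcile several interacting constraints at once.

```python
# Sketch: preserving a negative hold time by delaying the data pin.
# Assumes the simulator can insert an internal delay on the data signal.

def adjust_setup_hold(setup, hold):
    """Return (data_delay, setup_adj, hold_adj), keeping hold_adj >= 0
    while preserving the total setup + hold checking window."""
    if hold >= 0:
        return 0.0, setup, hold
    shift = -hold               # delay the data pin internally by |hold|
    return shift, setup - shift, 0.0

# A flop with a 0.30-ns setup and a -0.12-ns hold: data may change 120 ps
# before the clock edge and still be captured correctly.
print(adjust_setup_hold(0.30, -0.12))   # -> (0.12, 0.18, 0.0)
```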

What were once crude functional models of memory behavior have become very accurate models of both functionality and timing. Designs are using ever-increasing numbers of memories to perform complex functions. Memories have gone from simple synchronous single-port RAMs to asynchronous, self-timed, and multiport devices, as well as ROMs and EEPROMs. There is discussion of putting large embedded dynamic RAMs in logic devices. Memories once restricted to standard-cell designs are now available for quick-turn devices through embedded gate arrays. To ensure the entire design is functional, the memory models must have the same level of accuracy as any other cell in the library.

Signal skew has normally been associated with a design's master clocking scheme, but it also can include certain scan devices and other complex functions with special clock requirements. For this reason, signal skew is an integral part of the verification process.

Interconnects In older technologies, a lumped capacitance for all device receivers on a net was considered sufficient. As technology has shrunk, wiring interconnect has become a significant portion of the overall delay. For deep submicron technologies, the time required for a signal to propagate from the signal driver to its receiver is at least as long as the device's internal switching time.

A new approach is required to accurately reflect the part interconnect plays in the overall delay. One method being considered by ASIC vendors is the Elmore delay. This delay model takes into account the individual RC (resistance and capacitance) of each wiring segment. Parasitic extractors for 2-1/2D and 3D designs also are becoming necessary to provide an accurate representation of the interconnect delay. Another issue facing verification accuracy is the multiple receiver/driver scenario. This is common in large clock or bus networks, in which each receiver can be driven by multiple sources, and the actual device delays and individual interconnect delays differ depending on which driver(s) is/are active. Controlling interconnect delays also plays an important role in balancing clock trees.
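
For a simple point-to-point net, the Elmore delay reduces to summing, over each wire segment, that segment's resistance times all of the capacitance downstream of it. A minimal sketch, with invented segment values:

```python
def elmore_ladder(segments, load_cap):
    """Elmore delay of an RC ladder: for each segment, its resistance
    times the total capacitance downstream of it (including the load)."""
    delay = 0.0
    for i, (r, _) in enumerate(segments):
        downstream_c = sum(c for _, c in segments[i:]) + load_cap
        delay += r * downstream_c
    return delay

# Three segments of 50 ohms / 20 fF each, driving a 10-fF receiver pin.
segs = [(50.0, 20e-15)] * 3
print(f"{elmore_ladder(segs, 10e-15) * 1e12:.1f} ps")   # -> 7.5 ps
```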

Delay Selection With devices getting faster, it is critical to select the appropriate delay. This matters when selecting the proper driver in a multiple receiver/driver situation, and in more complex functions where different timing arcs can cause an output to go to a particular state. When the longest path delay is scheduled first and subsequent inputs schedule the output to transition in the same direction, but at an earlier time, it is possible to get an erroneously early scheduled output transition. In some devices this is not a problem, but in more complex cells, such as adders, it can be. Even in simpler combinational gates, such as a 5-input NAND, the delay difference between the timing arc closest to the output node and the one farthest from it can be as much as 100%. In deep submicron designs, where tens of picoseconds are counted, these differences must be accounted for; a small scheduling sketch appears after the next paragraph.

Verification Tools With deep submicron designs, the front and back ends can no longer be decoupled. Finding a logic or timing problem during post-layout verification and going back to resynthesize the entire affected block may no longer be an option, since the re-spin may introduce additional problems. Incremental, in-place optimization to minimize the iterations between the physical and logical design representations is considered a must. The tools must become better at predicting the final design characteristics at higher levels of abstraction to minimize these design iterations. Hardware/software co-verification also must become more mature for the system-on-a-chip concept to become a reality.
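
Returning to delay selection: the sketch below shows per-arc delay selection on a hypothetical 5-input NAND whose pin-to-output arcs differ by 2X. For a simple gate, where any single controlling input forces the output, the physically correct output time is the earliest arrival computed through each switching input's own arc; complex cells such as adders need more careful arc bookkeeping than this. The pin names and arc values are invented.

```python
# Pin-to-output delays for a hypothetical 5-input NAND, in ps; the pin
# closest to the output node is fastest, the farthest is 2x slower.
ARC_DELAY = {"A": 100, "B": 125, "C": 150, "D": 175, "E": 200}

def output_rise_time(falling_inputs):
    """Time at which the NAND output rises, given (time, pin) falling
    edges. Any single 0 input forces the output high, so the correct
    answer is the earliest arrival over each input's own arc -- not the
    last event stamped with a single lumped delay."""
    return min(t + ARC_DELAY[pin] for t, pin in falling_inputs)

# E falls at 0 ps (its effect arrives at 200 ps); A falls at 40 ps (its
# effect arrives at 140 ps). The output correctly rises at 140 ps.
print(output_rise_time([(0, "E"), (40, "A")]))   # -> 140
```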

To achieve maximum productivity when developing large ASICs, designers must use logic-synthesis tools. The number of gates per day a designer can create with synthesis tools has been estimated at 10X or more that achievable with schematic capture. Whether a company elects to begin a development with synthesis at the behavioral level or at the RTL level depends on many factors, such as the maturity of the relatively new behavioral compilers. Few floorplanners are available to end customers. What is available is usually offered by place-and-route tool vendors and is tightly coupled to only one router.

What this means is that the customer would be tied to one specific ASIC vendor or a group of vendors that use that particular router. End customers will either be required to have floorplanners compatible with the ASIC vendor's place-and-route tools, or have a close relationship with the vendor such that the vendor will perform the early floorplanning of the device to ensure this tight front-to-back-end correlation.

As the challenges of deep submicron continue to mount, ASIC and EDA vendors have begun teaming up to develop tools. A tighter coupling between the physical and logical representations of the design, even at the earliest stages of development, is necessary for the end product to be developed successfully. Parasitic-extraction, power-analysis, and signal-integrity tools are among the results of these relationships. Point tools alone may not be enough, and a tight coupling between the various tools is becoming increasingly necessary (Fig. 2).

Developers of large ASICs are realizing that iterative cycles between pre- and post-synthesis, as well as from synthesis to physical design, can be very costly and time-consuming, and the potential for nonconvergence toward the target performance is high. Rather than doing a total re-spin through synthesis each time, in-place optimization is essential for controlling these design iterations. Whether this is best accomplished with a central data model or with better ways of passing files has yet to be determined.

Logic Verification Cycle-based simulators (CBS) are being touted as next-generation verification tools because of their superior performance over event-based simulators. There are some trade-offs, however, that must be made when using these tools. CBSs make the assumption that all timing in the circuit is met and only a functional verification is required. When CBSs are used, static timing verifiers also must be used to ensure that proper timing is met.

These new tools may require design style changes since only fully synchronous designs can be completely validated on a CBS, and not all synthesized designs are compatible with these simulators. EDA vendors and end-users who understand the whole picture recognize these limitations and caution the users on when and how to use cycle simulators. Used properly, they provide users with much faster verification cycles at little or no risk to design integrity, especially in the beginning of the design verification cycle.

According to several ASIC vendors, less than 10% of the designs they receive from the merchant market are fully synchronous. About 80% are considered mostly synchronous but, for various reasons, fall short of fitting into this elite group. The numbers were much higher for ASIC vendors whose designs were mostly internal, since synchronous design was mandated by the company. Most designers are still relying on the accuracy of commercial event-based (timing) simulators.

Unlike event-based simulators, which have a minimum of four or five states to define the various logic values, most cycle-based simulators have only two states, "1" and "0," to boost performance. This can compromise simulation verification results, especially during the initialization phase, since the beginning states cannot take on the "X" or "U" value. It may therefore be unclear whether design flaws exist that could prevent the device from coming out of initialization, or whether timing problems such as glitches, bus contention, or floating buses exist in the design.
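
The initialization hazard can be seen in a few lines. The gate and register here are invented, and the resolution rule follows the usual multi-state HDL convention ('x' for unknown; 'z' omitted for brevity):

```python
def and4(a, b):
    """4-state AND over {'0', '1', 'x'}."""
    if a == "0" or b == "0":
        return "0"                 # a controlling 0 decides the output
    if a == "x" or b == "x":
        return "x"                 # otherwise any unknown stays unknown
    return "1"

uninitialized_q = "x"              # register that was never reset
print(and4(uninitialized_q, "1"))  # 4-state result: 'x' -- flaw is visible

forced_q = "0"                     # a 2-state simulator must pick 0 or 1
print(and4(forced_q, "1"))         # 2-state result: '0' -- flaw is masked
```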

Power compiler tools gate the clocks to conserve switching current in portions of the logic that are not needed at a particular time. If not handled properly, this too can cause problems in a design that was otherwise considered synchronous. Since clock gating seems to be the most widely accepted method of power conservation, simulation verification accuracy must be taken into account.

Designers must adopt new methodologies to capitalize on the benefits of cycle-based simulators. This methodology shift will not happen in the foreseeable future. As a result, giving attention to Spice-like accuracy in digital simulators remains important. Hardware accelerator and emulator companies and vendors of fast event-based simulators, such as Cadence, Avant!, Viewlogic, Mentor, and Model Technology, are banking on the fact that most designers are not going to—or may not be able to—change their design styles to fully synchronous ones, and thus will require fast event-based simulators.

Vendors of hardware emulators and hardware accelerators build special-purpose hardware that can achieve 10X to 100X performance at system and ASIC verification levels, yet maintain event-based accuracy. The demand has increased for these high-performance hardware and software solutions, a good indication that the requirement for event-simulation accuracy is alive and well. The design community must change design styles for the paradigm shift to the cycle-based/static timing verification approach to be successful.

Unlike event-based timing simulators, static timing tools do not require a set of stimuli to provide an indication of the circuit's timing. They do require the design to be fully synchronous for the results to be accurate. The purpose of these tools is to ensure that all device propagation activity occurs within a clock cycle. When asynchronous logic exists in the design, the tool cannot detect possible race conditions that can cause a glitch in the circuit and a possible malfunction. Event-based simulators can provide full coverage of both timing and functionality, including races and glitches, provided they are given a complete set of input stimuli. Since it may not be possible to provide complete stimulus coverage, especially in large, complex designs, static timing can provide additional verification protection.
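
At its core, a static timing tool computes the latest arrival time through the combinational network and compares it with the clock period, with no stimulus needed. A minimal sketch over an invented netlist:

```python
from functools import lru_cache

# gate -> (delay_ps, fanin gates); primary inputs have no fanin.
NETLIST = {
    "a": (0, []), "b": (0, []),
    "g1": (150, ["a", "b"]),
    "g2": (200, ["g1", "b"]),
    "g3": (120, ["g1", "g2"]),
}

@lru_cache(maxsize=None)
def arrival(gate):
    """Latest (longest-path) arrival time at a gate's output, in ps."""
    delay, fanin = NETLIST[gate]
    return delay + max((arrival(f) for f in fanin), default=0)

CLOCK_PERIOD_PS = 500
slack = CLOCK_PERIOD_PS - arrival("g3")
print(f"longest path: {arrival('g3')} ps, slack: {slack} ps")  # 470, 30
```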

Design Flows Engineers need to evaluate each of the available design tools in conjunction with their design methodology and styles to determine a proper fit. In some cases, it may be necessary to alter current design methodologies to match the capabilities of the tool. Where adequate tools aren't available, users (especially silicon vendors) may be forced to develop the capability in-house, where the silicon expertise resides. This also can create a tighter coupling between the process and the tools, as well as tool-to-tool integration. Commercial solutions still remain the better choice where possible, even though the cost of integrating them into a given flow can be high in terms of time and money.

The one area where a company's design methodology may vary is the high-level design entry point, i.e., behavioral or data flow (RTL). This often depends on the application and the engineering expertise within the company. Hardware/software co-verification and full system verification also may dictate at which level to begin. Until recently, all behavioral representations of the design had to be "human-synthesized" due to the lack of available tools to perform the task. Some limited behavioral synthesis tools are now available, and it is expected that similar capabilities to those offered by RTL synthesis tools will be available as these products mature.

At the behavioral or RTL level, there's no real need for the accuracy of event-driven simulation, so a cycle-based simulator provides adequate functional verification. Designs need to be split into approximately 10-kgate blocks to meet the capacity limitations of most synthesis tools; however, newer products on the market are claiming greater capacity. Configuration files are used to specify timing, power, and any other special requirements of the design.

Formal verification can be an excellent way to perform early design verification by comparing this new structural representation of the design against its RTL predecessor. Design for test also must be considered at this point, since testing these large, complex devices is becoming increasingly difficult and time-consuming. Built-in self-test (BIST) is now being considered for entire ASICs due to testing complexities and the time needed to validate production units. This is particularly applicable to complex IP.

Once the design is synthesized into structural blocks, it can be subjected to analysis tools such as dynamic simulation, power analysis, and static timing. At this point, it's important to know whether the design meets the criteria for fully synchronous design. Some ASIC vendors, such as AMI, have tools that not only analyze the design for problems, but also determine its compliance with fully synchronous design styles. For designs that meet the criteria, cycle-based simulation and static timing analysis can provide adequate validation.

For designs that do not, cycle-based simulators may still be an option for early validation, followed by full event-simulation verification before releasing the design to the ASIC vendor. Since wire interconnect is a dominant part of the overall delay, floorplanners must become an integral part of the early synthesis steps, as well as of pre- and post-layout, if design iterations are to be minimized. This applies not only to event-driven simulators, but also to static-timing, power-analysis, and noise-analysis tools.

The final sign-off validation requirements are still in the hands of the ASIC foundry, which is responsible for ensuring the design is manufacturable. In the past, most ASIC vendors had either an internal proprietary "golden" simulator or used a commercial tool they trusted for final verification of the design. Now, with the emergence of the HDL standards, most are providing sign-off support for various simulators they have certified to be in compliance with the standards. With this in place, qualified customers can now perform sign-off registration at their own facility instead of passing the design off to the foundry, where registration differences might be found between the simulator the customer used and the internal "golden" simulator. This puts the onus on the standards committees to ensure these languages are equal to the task.

What's Next? Each new generation of technology will continue to bring with it new challenges that will likely require additional standards. One thing we must not do is allow the release of standards to impede our ability to move forward. Both HDL language standards committees are preparing for a 1998 reballot. Silicon and tool vendors must work closely with the standards committees to ensure the standards keep pace.

Integrating the various EDA tools in a cohesive tightly coupled design environment may not be enough to ensure success. Silicon vendors may need to develop internal tools to augment those offered commercially. If the past 15 years are any indication of what this industry can do, the future looks bright as long as the team is working together.
