Nowadays, the methodologies of top-down design and bottom-up verification are well-accepted standards in the world of digital design. But this wasn't always the case. Prior to the availability of hardware description languages (HDLs), when designs weren't as complex, there was no pressing need for top-down design techniques. Back then, it was actually possible to go to detailed transistor design immediately after the release of the system specification.
Today, the top-down technique enables a methodical and systematic approach to complex designs. The methodology allows system-level design flaws to be exposed much earlier in the process, when they're much easier and less expensive to correct.
The same issues that forced digital design into top-down techniques are now pushing analog design in the same direction. In addition to becoming far more complex, today's analog designs frequently contain substantial mixed-signal circuitry, involving increased two-way interaction between the analog and digital portions. System-on-a-chip (SoC) devices feature a growing number of analog components, such as analog-to-digital converters (ADCs), digital-to-analog converters (DACs), phase-locked loops (PLLs), and adaptive filters—all of which are driving the demand for improved design methodologies.
Typically, analog and digital subsystems are created in isolation and never meet until IC layout. Furthermore, they're not tested together until the silicon returns from fabrication. Obviously, at this point, it becomes extremely expensive to find an inverted bus signal or a faulty interaction between the analog and digital portions. A far better solution is to simulate the entire system before going to IC layout, using bottom-up verification techniques.
The primary problem hindering the change to analog top-down design and bottom-up verification has been the lack of tool support for the design process between system-level specification and transistor-level implementation, as well as between transistor implementation and chip fabrication. These missing tools are commonly referred to as "The Gap."
The overview of automated design for both digital and mixed-signal branches is depicted in Figure 1. Each branch progresses in incremental steps from specification through transistor implementation to fabrication. Because it's more fully developed, the digital branch flows smoothly from start to finish, with adequate tools in place to support each phase of development. In contrast, the mixed-signal branch is subject to a number of missing tools, as highlighted by the Gap.
Fortunately, this gap is beginning to fill with tools from a number of different vendors. Within only the last year or so, for instance, the industry has introduced behavioral models, standard analog modeling languages, and mixed-signal verification tools.
Key tools already helping to fill the Gap are libraries of behavioral models. These libraries are collections of models that represent the behavior of a device rather than mimicking its actual implementation. The models are implemented at several different levels of abstraction, ranging from, say, a simple ideal op amp to a complex multipole/zero op amp. Plus, each model has dozens of parameters to enable customization for any need. In this way, a library of only 200 parts can be used to model hundreds of thousands of functions. One example of a library of behavioral models is CommLib, produced by Mentor Graphics.
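The idea of one model spanning several abstraction levels purely through its parameters can be sketched in a few lines of Python. Everything here—the class name, parameters, and defaults—is hypothetical and only illustrates the concept; it is not CommLib's actual interface.

```python
# Hypothetical parameterized behavioral model (illustrative only, not
# CommLib's real API): one op-amp model covers several abstraction levels
# purely through its parameters.

class OpAmpModel:
    """Behavioral op amp: ideal by default, finite-GBW when gbw is set."""

    def __init__(self, gain=1e5, vmax=5.0, vmin=-5.0, gbw=None):
        self.gain = gain                    # open-loop DC gain
        self.vmax, self.vmin = vmax, vmin   # output rail clamps
        self.gbw = gbw                      # gain-bandwidth product in Hz

    def dc_output(self, v_plus, v_minus):
        """Clamped DC transfer: gain * (v+ - v-), limited to the rails."""
        v = self.gain * (v_plus - v_minus)
        return max(self.vmin, min(self.vmax, v))

    def closed_loop_bw(self, closed_loop_gain):
        """Estimated closed-loop bandwidth: GBW / gain (None if ideal)."""
        if self.gbw is None:
            return None
        return self.gbw / closed_loop_gain

# Two "library parts" from the same model, differing only in parameters.
ideal = OpAmpModel()                        # simple ideal op amp
realistic = OpAmpModel(gain=2e4, gbw=1e6)   # finite-gain, finite-GBW variant
```

The same mechanism scales to dozens of parameters, which is how a modest library of parts can stand in for a very large number of distinct functions.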
These models also provide a huge benefit in Monte Carlo and corner-case analysis, two standard analog verification techniques. For instance, in the bad old days, designers had to create 10 different macromodels to perform a 10-point sweep analysis. Performing Monte Carlo analysis with macromodels was virtually impossible. But luckily, behavioral models eliminate these difficulties.
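A toy Monte Carlo run in Python shows why parameterized models make this easy. The block here is just a first-order RC low-pass, and the component values and tolerances are invented for the example.

```python
# Toy Monte Carlo sweep over a parameterized behavioral block (an RC
# low-pass). All values and tolerances are invented for illustration.
import random
from math import pi

def rc_bandwidth(r, c):
    """-3 dB bandwidth of a first-order RC low-pass, in Hz."""
    return 1.0 / (2.0 * pi * r * c)

def monte_carlo(nominal_r, nominal_c, tol=0.1, runs=1000, seed=42):
    """Draw R and C uniformly within +/-tol; collect the bandwidth spread."""
    rng = random.Random(seed)
    results = []
    for _ in range(runs):
        r = nominal_r * (1.0 + rng.uniform(-tol, tol))
        c = nominal_c * (1.0 + rng.uniform(-tol, tol))
        results.append(rc_bandwidth(r, c))
    return min(results), max(results)

# 1-kilohm, 1-nF nominal: nominal bandwidth is about 159 kHz.
bw_lo, bw_hi = monte_carlo(1e3, 1e-9)
```

With macromodels, each of those 1000 points would have required a hand-built model; here, each point is just a fresh set of parameter values.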
Suppose a model exactly describing a block of a proprietary design isn't available. Analog HDLs enter the picture here to provide the solution. A design team can create its own models, or write custom code, around an existing model to add the necessary additional features. Designers also can purchase the source code for the library model and use it as a basis for a custom library.
Which HDL should be chosen? In these times of partnerships, joint ventures, and purchased intellectual property (IP), it's very difficult to remain committed to a single HDL. So it's very important that the design simulator accepts all standard HDLs interchangeably—Verilog, Verilog-A/MS, VHDL, VHDL-AMS, Spice, and C-level models—as Mentor Graphics' ADVance-MS simulator does.
Analog HDLs and behavioral libraries enable system designers to quickly write a block-level system model that can easily be simulated to optimize chip performance early in the design cycle. Because it's written in a standard HDL, this system design can be employed as a live specification to pass down to the transistor-level designer or out to a design subcontractor in a language that they and their tools will understand. The standard HDL system design supplies a ready-made test bench, too.
Once in the hands of the transistor-level designers, each block will be logically decomposed, in increasing levels of detail, until the final transistor design is reached. At every point, the simulations should run correctly with any mix of partitions at any level of abstraction. This ensures that each partition is accurately moving down toward the implementation level. Such a methodical approach can shave weeks or months off of the design cycle, helping companies meet their time-to-market deadlines.
To gain a thorough understanding of transistor-level behavior, the detailed design engineer will closely examine any differences between transistor-level design and its upper-level behavioral model. This is the ideal time to "calibrate" the model to the transistor design. If the upper-level model is a CommLib library part, the designer can use the built-in test bench to automatically stimulate and characterize the transistor-level design. If the upper-level model is a custom model, the designer can build a test bench and/or use an optimization tool, such as Mentor Graphics' OPSIM, to match the model to the transistor-level behavior.
At this point, you may wonder why you should bother with behavioral libraries and calibration. Why not just submit the transistor-level design to some smart software and let it come up with a model? Unfortunately, despite some claims to the contrary, practical model synthesis is still a long way off. Attempts at this technology rely on pre-existing templates, which are unlikely to exist for leading-edge or proprietary designs. There's no pushbutton approach to analog modeling, and from all indications, this will remain the case for some time to come.
Once the models are calibrated, the analog and digital portions can be joined for a full-chip checkerboard verification of the entire SoC. With the checkerboard technique, the team performs the full-chip simulation with a small number of blocks at the transistor level while keeping the majority of the design at the behavioral level. The procedure can be repeated with different sets of transistor-level blocks until the team is satisfied. Likewise, this technique can be employed for a post-layout full-chip simulation by using transistor blocks obtained from the parasitic extraction tool.
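The bookkeeping behind checkerboard verification can be sketched in Python. The block names and the two-at-a-time grouping are hypothetical; the point is only that each pass drops a small subset of blocks to transistor level while everything else stays behavioral.

```python
# Sketch of checkerboard verification bookkeeping. Block names and the
# group size are hypothetical; each pass simulates a small group at the
# transistor level while every other block stays behavioral.

BLOCKS = ["phase_detector", "charge_pump", "loop_filter", "vco", "prescaler"]

def checkerboard_passes(blocks, group_size=2):
    """Yield one full-chip configuration per pass: the chosen group runs
    at transistor level, everything else remains behavioral."""
    for i in range(0, len(blocks), group_size):
        group = set(blocks[i:i + group_size])
        yield {b: "transistor" if b in group else "behavioral"
               for b in blocks}

configs = list(checkerboard_passes(BLOCKS))
# Across all passes, every block gets one transistor-level simulation.
```

For post-layout verification, the "transistor" entries would simply point at the netlists produced by the parasitic extraction tool.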
As designers gain experience and confidence in behavioral modeling, they may not bother with the checkerboard technique. Instead, they might simply perform a full-chip verification directly with the behavioral models. With an additional model calibration step, they can perform a pure-behavioral simulation after parasitic extraction as well.
No matter the technique, using a modern simulator that accepts all standard analog and digital HDLs makes full-chip verification a fairly simple process. Employing a language-independent simulator also provides the freedom to reuse major portions of the analog or digital test bench in the full-chip verification.
Moreover, there are other far-reaching benefits to using these design techniques. The models created for this design and refined for the verification are ideal starting models for the next generation of the product. At each iteration, the models continue to improve and mature along with each succeeding generation of the product (Fig. 2). With so many obvious advantages, this design methodology will soon become an indispensable part of every design process.
There are many strong business reasons that compel a switch to the top-down design methodology. Ever-increasing chip complexity and time-to-market pressures continue to outpace all current SoC development techniques. System-level design, especially mixed-signal design and partitioning, is a major cause of schedule delays. Designers need to increase their productivity, and embracing top-down design represents an important step in that direction.
In fact, given the ever-increasing complexity of analog designs today, it's no longer feasible to go immediately from specification to transistor-level design. Interim steps are required. This means that savvy analog designers will reap the benefits of creating system-level models and going step-by-step.
Too many analog designers bemoan after the fact, "If we only had high-level models earlier, then we could have optimized our effort, shortened our design schedule, and saved a lot of time and money." Without question, analog and mixed-signal design will continue to be a formidable challenge for engineers well into the 21st century. Top-down design and bottom-up verification design techniques provide a clear path for success, not only for the digital realm, but also for analog design projects.
The design of a large wireless transceiver, containing millions of transistors, will illustrate the power of top-down design and bottom-up verification, as well as the tools that make these techniques possible. The process begins with the system architects. Using building blocks from digital and analog behavioral libraries, plus custom HDL code, the architects put together a high-level model of the transceiver, including a transmitter block, a receiver block, and a clock-recovery phase-locked-loop (PLL) block. With this model, the system designers prove the feasibility of the concept.
At this point, the transmitter and receiver portions of the design are passed down to the digital design team and the PLL portion to the analog design team. The digital team uses traditional top-down design techniques, standard digital HDLs, digital synthesis, and digital simulation to complete its portion of the design.
Designing The PLL
The analog team begins by assembling a system diagram of the PLL, including a phase detector, charge pump, filter, VCO, and prescaler. Figure 3 shows how easily this can be accomplished with modern analog modeling languages powered by a modern language-neutral mixed-signal simulator, like Mentor Graphics' ADVance-MS. Alternatively, the team could assemble the PLL from models in the CommLib behavioral library. These models offer adjustable parameters and multiple levels of abstraction so that they can be tuned precisely as required.
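To make the block diagram concrete, here is a toy phase-domain PLL assembled from the same five blocks in Python. The gains, divide ratio, and normalized frequencies are invented; a real model would be written in an analog HDL such as VHDL-AMS, but the structure—detector, pump and filter, VCO, and prescaler wired into a loop—is the same.

```python
# Toy phase-domain PLL built from behavioral blocks. All gains and
# frequencies are invented, normalized per-timestep quantities.

def simulate_pll(f_ref=0.01, n_div=8, kp=0.2, ki=0.02, steps=5000):
    """Phase detector -> charge pump/filter (PI) -> VCO -> /N prescaler."""
    phi_ref = phi_div = 0.0   # reference and divided-VCO phases
    integ = 0.0               # loop-filter integrator state
    f_vco = 0.0               # VCO output frequency (set by control value)
    for _ in range(steps):
        err = phi_ref - phi_div       # phase detector
        integ += ki * err             # charge pump into the loop filter
        f_vco = kp * err + integ      # VCO: control value sets frequency
        phi_ref += f_ref              # reference phase advances
        phi_div += f_vco / n_div      # prescaler divides the VCO phase
    return f_vco

# In lock, the VCO settles at n_div * f_ref = 0.08 per timestep.
f_locked = simulate_pll()
```

Even a throwaway model like this lets the team confirm loop stability and lock behavior before committing to a topology.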
The design team proceeds to perform various types of analyses such as open-loop analysis, linear step response, noise analysis, corner-case analysis, and Monte Carlo analysis. The team also continues to add more second- and third-order effects to the behavioral models. At each step, the results are compared to the previous step to thoroughly understand the impact of all the effects added at each level.
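As one example of these analyses, the linear step response of a second-order PLL can be checked against textbook expectations before any transistor exists. The sketch below Euler-integrates the classic closed-loop transfer function H(s) = (2ζωn·s + ωn²)/(s² + 2ζωn·s + ωn²) for a unit phase step; the ωn and ζ values are arbitrary illustration choices.

```python
# Linear step response of the classic second-order PLL closed loop,
# H(s) = (2*zeta*wn*s + wn^2) / (s^2 + 2*zeta*wn*s + wn^2),
# via forward-Euler integration. wn and zeta are illustrative choices.

def step_response(wn=1.0, zeta=0.7, t_end=20.0, dt=1e-3):
    """Integrate a unit phase step; return (peak value, final value)."""
    x1 = x2 = 0.0     # controllable-canonical states: x2 = x1'
    peak = 0.0
    y = 0.0
    for _ in range(int(t_end / dt)):
        y = wn * wn * x1 + 2.0 * zeta * wn * x2            # output phase
        dx2 = -wn * wn * x1 - 2.0 * zeta * wn * x2 + 1.0   # unit step input
        x1 += x2 * dt
        x2 += dx2 * dt
        peak = max(peak, y)
    return peak, y

# The loop zero causes overshoot (roughly 20% at zeta = 0.7) before the
# output settles at the commanded phase.
peak, final = step_response()
```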
The models continue to be refined and further simulations are performed until the design team is ready to begin decomposing blocks to the transistor level. To conserve time, though, only the block that's currently being designed is simulated at the transistor level. All of the remaining blocks continue to be simulated with behavioral models. This technique can often speed up the simulation by many orders of magnitude.
To proceed with the step-by-step design methodology, the behavior of the transistor-level design must be compared to that of its upper-level behavioral model. Any differences must be understood completely and resolved. Figure 4 illustrates the simulation results for a CommLib charge pump versus the transistor-level design. In each case, the circuit works, but the responses aren't the same. The design team has to understand the differences and calibrate the behavioral model.
Using the CommLib charge-pump test bench, the differences between the transistor and behavioral designs become clear (Fig. 5). The levels of the positive and negative output currents don't match, and the risetime is significantly faster. At this point, the design team must modify the behavioral model or the transistor-level design (or both) so their behaviors are brought into precise alignment.
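A toy version of this calibration step in Python: extract the mismatched quantities from the transistor-level characterization and fold them back into the behavioral parameters. All names and values here are invented for illustration; they are not CommLib's real parameters.

```python
# Toy charge-pump calibration. Parameter names and measured values are
# invented for illustration; they're not CommLib's real interface.

def charge_pump_current(params, up, down):
    """Behavioral charge pump: source on UP, sink on DOWN, else off."""
    if up and not down:
        return params["i_up"]
    if down and not up:
        return -params["i_down"]
    return 0.0

# Uncalibrated behavioral parameters (idealized, symmetric).
model = {"i_up": 100e-6, "i_down": 100e-6, "t_rise": 2.0e-9}

# Values characterized from the transistor-level design's test bench.
measured = {"i_up": 98.5e-6, "i_down": 101.2e-6, "t_rise": 1.3e-9}

# Calibration: overwrite the behavioral parameters with measured behavior.
model.update(measured)
```

After this step, the behavioral model reproduces the characterized source and sink currents and risetime rather than its idealized defaults, so full-chip simulations built on it track the real silicon behavior.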
Once the analog and digital designs are complete, they must be tested to verify that they work together before going to layout and fabrication. In this example, the analog team determines that the test bench used by the digital team is adequate for the analog design. Because VHDL-AMS is a pure superset of VHDL, the process of joining the two designs is as simple as instantiating the clock-recovery PLL into the digital test bench and rewiring the receiver clock to the output of the PLL (Fig. 6). The automatic testing of the digital test bench verifies that the design is working properly. This design is now ready for layout and first-pass fabrication success.
Despite all of the benefits associated with analog top-down design, there remains a strong resistance against adopting this methodology. Many analog designers—at least so far—are content to maintain the status quo. Because most of the tools are new, many designers are unaware of their availability. This lack of awareness is further compounded by a natural resistance to change long-established procedures. Additionally, there's a learning curve associated with the new analog tools.