Mixed-Signal Tools Evolve To Accelerate Complex Designs

June 23, 2011
Assessment of existing mixed-signal tools for engineers who are not themselves chip designers.

Fig 1. The flow for a mixed-signal IC design involves many steps and iterations. They can be automated, but the time-killer used to be the Spice simulations. This has led to modeling functional blocks at a higher level than transistors. The challenge lies in re-integrating the higher level with the design flow while obtaining realistic results.

Fig 2. In a representation of the transfer function of a typical ADC, the horizontal axis compares the analog input voltage with the various thresholds the converter can resolve. The vertical axis shows the corresponding output digital code. The solid staircase line shows an ideal transfer function, and the solid straight line represents the device’s actual transfer function.

Fig 3. In modeling the real-world ADC, the output of the test described in the article includes a noise floor and significant harmonics (a). Quantization noise forms the noise floor, which can be neglected for ADCs with resolutions of 8 bits and higher. The first few harmonics (amplitude and phase) extracted from the FFT test are inserted in the model. The error in the transfer function of the ADC is then calculated as the difference between the input signal and a signal constructed from the harmonics. Calculating average INL (b) would theoretically require an infinite number of harmonics. But from a practical standpoint, just the harmonics above the noise floor are sufficient.

Fig 4. In the optimizer example in Mentor’s DesignCon paper, the circuit being designed uses series-shunt feedback. The series-shunt pair, Q1 and Q2, governs its ac performance. Emitter-follower Q3 buffers load RL. The input bias current for Q1 is derived from a separate feedback loop that is decoupled at most frequencies by CD. Input and output are ac-coupled.

It’s tempting for analog engineers who deal with circuits on the board and system level to think that most IC designers are bit-bangers who weave their magic carpets exclusively out of digital blocks. In fact, IC designers are often circuit types like us who deal with electronics at the transistor or even at the quantum-physics level. That’s true of the design of both all-digital and mixed-signal ICs.

Indeed, it may not be long before exclusively digital system-on-a-chip (SoC) ICs are rare indeed. One can already see that day coming as analog ICs increase their level of integration to maximize their performance in specific kinds of products. For instance, it has become common for a new analog-to-digital converter (ADC) to be customized for a real-world application such as ultrasound or industrial control by building in what used to be a separate analog front end (AFE) ahead of it.

Given that trend, we’re not too far away from seeing the DSP and control functions that follow the ADC being integrated as well, and the day of “big-A, big-D” (very large scale integration of both analog and digital functions) will have arrived.

SIZE SQUEEZE

At the same time, atomic-level effects in chip design are becoming more subtle. One indication of that was brought home at ISSCC this year, as Gordon Gammie of Texas Instruments discussed some research that he led exploring cumulative gate delays at the 28-nm process node. (The team was made up of TI engineers and students from the Massachusetts Institute of Technology.)

The problem was in the doping of individual gates. At that scale, Gammie said, you’re only doping a handful of lattice locations in each gate. The resulting variations in gate delay made it difficult to deal with timing budgets in a chip. The team reported on its results in a 2011 ISSCC paper titled “A 28nm 0.6V Low Power Digital Signal Processor (DSP) for Mobile Applications.” For a somewhat more detailed version of what was done, see “Distributing Dopants to Doom Moore’s Law?” at www.electronicdesign.com.
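The arithmetic behind that concern is simple enough to sketch. The Python snippet below is our own illustration, not the paper’s; every parameter in it is made up. It shows how Poisson counting statistics on a handful of dopant atoms translate into double-digit percentage spreads in threshold voltage and gate delay:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative only: with an average of just 20 dopant atoms per gate,
# Poisson counting statistics alone give sigma/mean = 1/sqrt(20), about 22%.
mean_dopants = 20
dopant_counts = rng.poisson(mean_dopants, size=100_000)

# Toy first-order mapping (assumed numbers): threshold voltage shifts with
# dopant count, and gate delay varies roughly inversely with the overdrive.
vdd, vt_nom, sensitivity = 0.6, 0.3, 0.005   # volts, volts, volts/dopant
vt = vt_nom + sensitivity * (dopant_counts - mean_dopants)
delay = 1.0 / (vdd - vt)                     # arbitrary units

print(f"dopant count spread : {dopant_counts.std() / mean_dopants:.1%}")
print(f"gate delay spread   : {delay.std() / delay.mean():.1%}")
```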

THE WAY IT WAS

To understand how design tools are changing to help IC designers deliver higher levels of mixed-signal integration, and thus more foolproof chips, to the board level, it’s useful to understand the arena in which the current tools are playing. (For a look further back in time, see “Early Steps,” p. xx.)

By contrasting the latest tools for mixed-signal IC design with the tools of the past, it’s easier to see where the design process is most likely headed. The most recent conference papers and detailed product announcements in mixed-signal from electronic design automation (EDA) vendors reveal some of that direction.

As it used to be done, and as many designs are still done today, an analog design cycle would start with designers sketching out a simple analysis of the proposed circuit by hand just to get an idea of the scope of the task (Fig. 1). In accomplishing this, designers would use first-order models, with the basic circuit specifications treated as boundary conditions, to develop design equations that would reflect the way design variables affected each other.

In the next step, designers would determine the key design “knobs” that could be easily tweaked to make the new design fit the desired performance characteristics. This step and the previous step were likely to be refined iteratively. When the equations were satisfactory, they would be written into a computer script that could be run over and over to fine-tune the initial versions of the IC design.
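Such a script might look something like the following Python sketch, which encodes textbook square-law equations for a single-stage amplifier. All of the numbers and the equations themselves are generic illustrations, not any particular designer’s analysis:

```python
# A sketch of the kind of design-equation script described above, using
# first-order MOS equations for a single-stage amplifier. All values are
# hypothetical; real scripts encode the designer's own hand analysis.
import math

def first_order_amp(i_bias, w_over_l, c_load, k_prime=200e-6, lam=0.1):
    """First-order gain and bandwidth from the classic square-law model."""
    gm = math.sqrt(2 * k_prime * w_over_l * i_bias)  # transconductance
    ro = 1 / (lam * i_bias)                          # output resistance
    gain = gm * ro                                   # dc gain
    bw = gm / (2 * math.pi * c_load) / gain          # -3-dB bandwidth
    return gain, bw

# Sweep a "design knob" (bias current) against the desired performance.
for i_bias in (10e-6, 50e-6, 100e-6):
    gain, bw = first_order_amp(i_bias, w_over_l=20, c_load=1e-12)
    print(f"Ibias={i_bias*1e6:5.0f} uA  gain={gain:6.1f}  bw={bw/1e6:8.2f} MHz")
```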

The next step would be a Spice simulation as a reality check and an opportunity to further refine the design, especially in terms of second-order effects that were not considered in the initial rough approximations. Once again, this could involve multiple iterations.

After that, the design would be optimized, a process that often benefitted as much from the designer’s experience and intuitive understanding of the circuit as it did from the design tools, no matter how good they were. Chip design has never been a cookbook process.

Optimization still left room for some further Spice tweaking, and very likely some revisiting of previous steps as basic assumptions from step 1 would be revised to make the still-theoretical outcome fit the desired performance profile.

Efforts to obtain realistic results without the time overhead of Spice simulations continue. The sections that follow look at two results that have been reported by Mentor Graphics and one reported by Cadence Design Systems.

SIMULATING ADC IP USING INL

Earlier this year, Mentor presented “Behavioral Modeling of the Static Transfer Function of ADCs Using INL Measurements” at the International Conference on Microelectronics (ICM) in Cairo. The paper described an approach to modeling an ADC’s static transfer function using integral nonlinearity (INL) measurements.

To use the approach, one would run a fast Fourier transform (FFT) on the output of an actual ADC and extract the significant harmonics, which would then be used in a behavioral functional model to approximate the INL using a polynomial function.
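As a rough sketch of that extraction step (our own illustration in Python; the paper does not specify windowing or bin-selection details, so coherent sampling is assumed here), one might write:

```python
import numpy as np

def extract_harmonics(adc_codes, n_harmonics=10):
    """Extract amplitude/phase of the first few harmonics from an FFT
    of the ADC output driven by a full-scale, coherently sampled sine."""
    n = len(adc_codes)
    spectrum = np.fft.rfft(adc_codes) / (n / 2)
    fundamental_bin = np.argmax(np.abs(spectrum[1:])) + 1
    harmonics = []
    for k in range(1, n_harmonics + 1):
        b = k * fundamental_bin
        if b >= len(spectrum):
            break
        harmonics.append((k, np.abs(spectrum[b]), np.angle(spectrum[b])))
    return harmonics

# Toy stand-in for "the output of an actual ADC": a quantized sine with
# a mild cubic nonlinearity (purely illustrative, not from the paper).
n, cycles = 4096, 67                      # coherent sampling: 67 is prime
t = np.arange(n)
x = np.sin(2 * np.pi * cycles * t / n)
adc = np.round((x + 0.02 * x**3) * 2000)  # roughly 12-bit quantization
for k, amp, ph in extract_harmonics(adc, 5):
    print(f"harmonic {k}: amplitude {amp:9.3f} LSB, phase {ph:+.3f} rad")
```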

Modeling in this fashion makes the result independent of the ADC type or structure, which suits it to bottom-up system verification. And the model would simulate something like 300 times faster than a Spice model.

After noting that analog hardware description languages (HDLs) have lagged digital HDLs significantly and have only started to be used in recent years, the authors note that INL, the deviation of the transfer function of an ADC from the ideal straight line, is exactly what is needed as a starting point (Fig. 2). Circuit designers who use discrete ADCs are inclined to look at INL as a selection criterion.
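For reference, the standard endpoint-fit definition of INL can be computed in a few lines. This sketch is generic, not tied to the paper:

```python
import numpy as np

def inl_from_thresholds(thresholds):
    """Endpoint-fit INL: deviation of each measured code-transition
    voltage from a straight line through the first and last transitions,
    expressed in LSBs. (Standard definition, not specific to the paper.)"""
    thresholds = np.asarray(thresholds, dtype=float)
    codes = np.arange(len(thresholds))
    lsb = (thresholds[-1] - thresholds[0]) / (len(thresholds) - 1)
    ideal = thresholds[0] + codes * lsb
    return (thresholds - ideal) / lsb

# Hypothetical 3-bit example: seven transition voltages with a bowed error.
measured = [0.10, 0.24, 0.40, 0.53, 0.64, 0.76, 0.90]
print(np.round(inl_from_thresholds(measured), 3))
```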

FUNCTIONAL VERSUS STRUCTURAL MODELING

INL allows the use of functional rather than structural modeling. Structural modeling involves building behavioral sub-models for each of the building blocks of the converter and assessing the contributions of each to overall nonlinearity. With functional modeling, the converter is modeled as a whole—that is, the transfer function of the converter is modeled. The architecture is invisible.

Now here’s where the novelty of Mentor’s new approach comes in. “Some models based on this approach require the user to provide the INL at every input threshold, i.e. (2^N - 1) different values,” Mentor’s presentation reported. That means 65,535 parameters for a 16-bit ADC, which is computationally intensive. “Other models generate a random INL based on a maximum deviation value given by the user,” Mentor added. That’s not accurate.

The paper proposes an empirical approach. First, “a sinusoidal input is applied to the converter. This test signal has to span the full scale to be able to evaluate the INL for the whole input range.” Then, “an FFT is performed on the output of the converter... using a suitable sampling frequency and number of FFT points” (Fig. 3). Finally, the procedure uses “a polynomial to approximate the average value of the INL.”
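Here is one plausible rendering of that last step in Python. It is our reading of the procedure, with assumed details: the error signal is rebuilt from harmonics two and up, and a low-order polynomial fit against the ideal input naturally averages the rising- and falling-slope contributions, yielding the average INL:

```python
import numpy as np

# Sketch of the paper's final step (our reading, with assumed details):
# reconstruct the converter's error from the extracted harmonics, then
# fit a low-order polynomial to approximate the average INL vs. input.
def inl_polynomial(harmonics, full_scale=2047.0, order=5, n_points=4096):
    phase = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
    ideal = full_scale * np.sin(phase)
    # The signal reconstructed from harmonics 2 and above is the error
    # term; the fundamental itself is the (gain-adjusted) ideal response.
    error = np.zeros(n_points)
    for k, amp, ph in harmonics:
        if k >= 2:
            error += amp * np.sin(k * phase + ph)
    # Least squares over the sine sweep averages the two slopes at each
    # input value, giving the *average* INL as a polynomial in the input.
    coeffs = np.polyfit(ideal, error, order)
    return np.poly1d(coeffs)

# Hypothetical harmonics (k, amplitude in LSBs, phase) above a noise floor.
p = inl_polynomial([(2, 3.0, 0.1), (3, 8.0, -1.2), (5, 1.5, 0.4)])
print(p(np.array([-2047.0, 0.0, 2047.0])))   # INL estimate at three codes
```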

In tests, the authors used a 12-bit successive approximation register (SAR) ADC as the reference, applying a sinusoidal input across its full input-voltage range. An FFT was run on the output, and the harmonics were extracted. A behavioral model built from 10 of those harmonics was then simulated with a ramp input, while the transistor-level reference circuit was driven by a ramp slow enough to cover all output codes.

Overall test results demonstrated a high degree of correlation between the reference transfer function and the output of the model using 30 harmonics, although the two exactly overlap at only a very few points. The root-mean-square error (RMSE) between the model and the reference was 17.4 least significant bits (LSBs) with 30 harmonics. (It decreases with fewer harmonics.) The paper notes that “The model execution time however is independent of the number of harmonics used.”
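The comparison metric, as we read it, amounts to an RMSE between the two transfer functions sampled at the same inputs, expressed in LSBs. A minimal sketch (the toy mismatch term is ours):

```python
import numpy as np

def rmse_lsb(model_codes, reference_codes):
    """RMSE between two transfer functions sampled at the same input
    points, in LSBs (our assumed comparison metric, not the paper's)."""
    m = np.asarray(model_codes, dtype=float)
    r = np.asarray(reference_codes, dtype=float)
    return np.sqrt(np.mean((m - r) ** 2))

# Hypothetical ramp responses of model vs. transistor-level reference:
ramp = np.linspace(-1, 1, 1001)
reference = np.round(2047 * ramp)
model = np.round(2047 * ramp + 20 * ramp**3)   # made-up model mismatch
print(f"RMSE = {rmse_lsb(model, reference):.1f} LSB")
```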

FURTHER OPTIMIZATION

At 2010’s DesignCon, Mentor had presented the Eldo Optimizer, a new design methodology assisted by an optimization engine. Instead of forcing the designer to conform to an arbitrary sequence of procedures, Mentor said, it exploits “the designer’s intuition to guide the optimization flow.” In a paper at the conference titled “An Optimizer-Assisted Design Methodology For a Two-Stage Wideband Feedback Amplifier,” the authors gave an example based on optimizing a series-shunt feedback amplifier for bandwidth.

Eldo is Mentor’s workhorse Spice simulator, and the Eldo Optimizer shortcuts the iterative manual Spice simulations of the usual design process. It doesn’t make ace chip designers out of new college grads, but it does simplify the designer’s role to the initial intuitive hand analysis and the identification of the major design knobs.

After that data is supplied to the optimizer, the software takes control of the Spice engine and uses it to zero in on a design. The DesignCon paper follows the design flow for the example two-stage feedback amplifier in detail, working from fundamental dc, ac, and noise behaviors to a bandwidth-optimized design (Fig. 4).
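Conceptually, the loop looks like a numerical optimizer wrapped around a simulator. The sketch below is emphatically not Eldo’s algorithm; it substitutes a made-up smooth function for the Spice run just to show the shape of the flow:

```python
import numpy as np
from scipy.optimize import minimize

# Generic sketch of an optimizer driving a "simulator" to maximize
# bandwidth. evaluate_bandwidth() stands in for a Spice run; here it is
# an invented smooth function of two design knobs (values are arbitrary).
def evaluate_bandwidth(knobs):
    r_feedback, i_bias_ma = knobs
    return 10.0 / (1 + (r_feedback - 1.2) ** 2 + (i_bias_ma - 3.0) ** 2)

def cost(knobs):
    return -evaluate_bandwidth(knobs)     # minimize negative bandwidth

result = minimize(cost, x0=[2.0, 1.0], method="Nelder-Mead")
print("optimal knobs  :", np.round(result.x, 3))
print("bandwidth (GHz):", round(evaluate_bandwidth(result.x), 3))
```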

For high bandwidth, the amplifier is built on a silicon-germanium (SiGe) bipolar technology, with a cutoff frequency around 150 GHz and an hFE of 300, running off a 3.3-V rail. Load RL is 2 kΩ, and the input source resistance RS is 50 Ω.

The paper’s description of the evolution of the design is very detailed and would offer a nice introduction to the analysis of dependencies and design handles for someone unfamiliar with the process.

Essentially, the process is split into three phases. The purpose of each phase is argued, and the corresponding results are examined. Finally, the paper concludes with an assessment of the benefits of the methodology and the leverage the optimizer provides.

CADENCE

Cadence, too, pursues faster simulations and greater control for the IC designer. For example, its Virtuoso AMS Designer Simulator for mixed-signal verification employs a single executable kernel for both its analog and digital simulation engines, and it handles behavioral models created in virtually any tool or format.

The simulator links the Virtuoso custom-design platform with the company’s Incisive digital verification platform, integrating a graphical user interface (GUI), embedded simulation engines, and a common verification methodology. It translates bidirectionally between digital signals and analog voltage levels.

A “power smart connect module” goes a step further, extending the Common Power Format (CPF), which defines digital low-power structures, to mixed-signal simulations. If an analog signal’s source can be traced to a digital signal that has a CPF definition, a power smart connect module can distinguish between an X (unknown) state that reflects a functional error and an X resulting from routine operating conditions.
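The decision logic is easy to caricature. The toy function below is ours, not Cadence’s implementation, and the 1.2-V rail is an assumption. It captures the idea: an X driven from a powered-down CPF domain is routine, while an X from a powered domain is flagged:

```python
# Toy illustration (not Cadence's implementation) of the idea behind a
# power-aware connect module: an 'X' on a digital driver is benign if the
# CPF power domain feeding it is switched off, and a real error otherwise.
def resolve_x(digital_state, domain_powered):
    if digital_state in ("0", "1"):
        return {"0": 0.0, "1": 1.2}[digital_state]  # volts (assumed rail)
    if digital_state == "X" and not domain_powered:
        return 0.0    # routine: domain is shut down, treat as quiescent
    raise ValueError("X on a powered domain: likely a functional error")

print(resolve_x("1", domain_powered=True))    # -> 1.2
print(resolve_x("X", domain_powered=False))   # -> 0.0 (benign X)
```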

Virtuoso AMS Designer also supports multiple levels of abstraction, allowing the design to evolve from fully behavioral to fully transistor-level. A hierarchy editor configures the design, facilitating viewing and preparation of the design for mixed-signal simulation.

To stay within the digital simulation environment while performing top-level SoC verification, Virtuoso and Incisive both support real number modeling (RNM). With RNM, discrete, floating-point real numbers can represent voltage levels. This lets users describe analog blocks with signal-flow models and simulate them in a digital solver at near-digital simulation speeds.
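In practice, RNM models are written in a language such as SystemVerilog or Verilog-AMS with real-valued nets; the Python sketch below just illustrates the concept of an analog block reduced to a discrete-time signal-flow model evaluated once per digital timestep:

```python
# Conceptual sketch of real number modeling, shown in Python rather than
# an HDL: an analog block reduced to a signal-flow model on real samples.
import math

def rc_lowpass(samples, fc_hz, fs_hz):
    """First-order RC low-pass as a discrete-time signal-flow model."""
    alpha = 1 - math.exp(-2 * math.pi * fc_hz / fs_hz)
    y, out = 0.0, []
    for x in samples:
        y += alpha * (x - y)   # one-pole update, one value per timestep
        out.append(y)
    return out

# Drive the "analog" block with a step at digital-simulation timesteps.
step = [0.0] * 10 + [1.0] * 90
print([round(v, 3) for v in rc_lowpass(step, fc_hz=1e6, fs_hz=20e6)][:20])
```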

Another advantage of staying within the digital simulation environment is the availability of a metric-driven verification (MDV) methodology. MDV makes it possible to use specifications to create verification plans, measure progress, and more easily determine when verification is complete.

The MDV flow starts with automated planning. The plan specifies the verification environment requirements for a coverage-driven testbench language such as SystemVerilog or e. MDV was once used exclusively for digital-chip design, but with most of today’s chips being mixed-signal, its appeal is irresistible. A mixed-signal MDV flow takes advantage of RNM for top-level SoC verification.

Early Steps

The tools in the report are recent developments. They owe their existence to efforts that have been going on since the turn of the century and before.

For example, several years ago, noting that analog intellectual property (IP) reuse remained a manual process, Mentor Graphics introduced a technique for automatic circuit resizing between different technologies. It employed an algorithm based on knowledge extraction: it extracted the features of the old design’s basic devices and blocks and generated a resized design in the new process technology.

Mentor’s methodology of those days would study the original IP and extract the design hierarchy first, then all device dimensions, currents, biasing voltages, and parasitics associated with the devices, followed by the symmetry information for each device in each subcircuit.

The challenging part lay in preserving the characteristics of each device across technologies while changing dimensions and scaling currents and node voltages. The reward was that there would be no need to run multiple simulations.
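A caricature of that scaling problem (not Mentor’s algorithm, and with invented process numbers): under square-law assumptions, keeping a device’s drain current and overdrive fixed while moving to a new feature size and process transconductance dictates how W and L must scale:

```python
# Toy version of the resizing idea: scale a device's length to the new
# process's feature size and rescale its width to keep the same drain
# current at the same overdrive under square-law behavior.
def resize_device(w, l, i_d, k_src, k_dst, l_min_src, l_min_dst):
    l_new = l * (l_min_dst / l_min_src)       # scale length with the node
    # Square law: Id = 0.5 * k' * (W/L) * Vov^2, so keeping Id and Vov
    # fixed requires W/L to scale by k_src/k_dst.
    w_new = (w / l) * (k_src / k_dst) * l_new
    return w_new, l_new, i_d

# Hypothetical migration: a 0.35-um device moved to a 0.13-um process.
w, l, i = resize_device(w=10e-6, l=0.35e-6, i_d=50e-6,
                        k_src=170e-6, k_dst=350e-6,
                        l_min_src=0.35e-6, l_min_dst=0.13e-6)
print(f"W={w*1e6:.2f} um  L={l*1e6:.2f} um  Id={i*1e6:.0f} uA")
```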

In practice, information about the source and target processes would be supplied as an ASCII file, and the original IP would be supplied as a Spice netlist. The design would be extracted using an analog simulator. Mentor developed a block-recognition engine that could extract basic building blocks, including differential pairs, current mirrors, level shifters, voltage references, and flip-flops.

To facilitate use, the graphical user interface (GUI) for the engine provided multiple screens for process data entry, extracted design data, the migration process, layout migration, and verification.

In the process data entry fields, the user specified the path to the source and target model libraries, the source and target feature sizes, as well as source and target supply voltages. Specific constraints on device sizes such as minimum or maximum transistor lengths and widths could be specified through the process data entry fields as well.

In the design extraction window, the user would specify the path of the technology file and the path of the source netlist that contained the original design. After that, the user pushed the extract button, and a window showed the hierarchy of the design.

The user could view the different node voltages in each subcircuit in the design and specify constraints on the node voltages or a scaling factor for the current throughout the design in the target technology. The constraints applied to the design would then be saved to what is called a directive file.

The netlist migration engine read this file to apply the specified constraints while sizing the circuit. In the netlist migration section, the user started the migration process and could watch all the parameters, such as currents, voltages, small-signal parameters, and parasitics, for each device in each subcircuit in both the source and target netlists.

The verification section was used to run testbenches for both the source and target designs to compare their performance. The layout migration section still was under development and would enable the migration of layouts as well as netlists.

