The Electronic Design Automation (EDA) industry continues to face new challenges as it targets ever-shrinking deep-submicron geometries. Each successive advance of semiconductor technology has brought a new very-deep-submicron (VDSM) phenomenon, such as heat dissipation, electromigration, and interconnect coupling. Many EDA design tools have been enhanced to deal with these issues. Now another issue, circuit aging, has come to the forefront. This phenomenon must be addressed to ensure VDSM performance.
Circuit aging refers to the deterioration of circuit performance over time. That time span can range from a few years down to a few months under worst-case conditions. Circuits have always aged. But this aging wasn't significant until the latest iteration of Moore's Law, which pushed transistor channel lengths to 0.18 µm. The combination of extremely small channel lengths and higher operating frequencies has elevated circuit aging from an academic exercise to a growing, and perhaps detrimental, concern for system-on-a-chip (SoC) designs.
Circuit aging can no longer be ignored. All portions of the SoC, whether analog, digital, or memory, will be affected. The negative impacts can include slower speeds, irregular timing characteristics, and increased power consumption. In extreme cases, circuit aging may even cause functional failures to occur over time.
The predominant cause of circuit aging is the degradation of individual deep-submicron transistors. This behavior, known as hot-carrier-induced (HCI) degradation, has been extensively studied since the early 1980s. A transistor conducts when carriers are sent from one side of the transistor, known as the source, to the other side, the drain. The force that propels these carriers is the electric field. In VDSM transistors, these electric fields become much more intense. As a result, the carriers travel much faster, leading to higher switching speeds. Such speeds, however, come at a price.
The carriers are accelerated so much, and travel so fast, that upon their arrival at the drain side of a transistor, they shatter the surrounding silicon atoms. The violent collision, called impact ionization, generates two new carriers: one electron and one hole (Fig. 1). The longer the transistor is in operation, the greater the number of new carriers generated. Both HCI and circuit aging are cumulative behaviors over time.
The newly created carriers wouldn't be so bad if they didn't damage the physical structure of the transistor. Unfortunately, they do cause harm. In an NMOS transistor, the electrons cause damage at a particular area: the interface between the gate oxide and the silicon surface. This interface can become populated with so-called interface "states," causing the NMOS transistor to have a higher threshold voltage. As a result, it produces less current, which translates into slower switching speeds. The exact amount of this decrease can be quantified by measuring the newly created holes that flow out through the silicon substrate (represented by ISUB).
For PMOS transistors, the degradation mechanism is a little different. The newly created electrons are at fault once again. This time, though, they lodge and trap themselves inside the gate oxide of the transistor. Such electron trapping causes the PMOS transistor to have a lower threshold voltage. As a result, PMOS transistors carry more current than before HCI degradation. The monitor for PMOS transistors is gate current (represented by IGATE).
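To see how these threshold-voltage shifts translate into current changes, consider the classic long-channel square-law approximation, where saturation drain current goes as the square of the gate overdrive. This is only a hedged sketch with illustrative numbers (the constants below are hypothetical, not values from the article):

```python
# Hedged sketch: long-channel square-law model, I_d = k * (V_gs - V_t)**2.
# All constants are illustrative, working with |threshold| magnitudes.
K = 1.0      # transconductance factor, arbitrary units
V_GS = 1.8   # gate drive, volts

def i_d(v_t):
    """Saturation drain current for a given threshold-voltage magnitude."""
    return K * (V_GS - v_t) ** 2

fresh = i_d(0.45)        # nominal threshold
aged_nmos = i_d(0.50)    # interface states raise the NMOS threshold
aged_pmos = i_d(0.40)    # trapped electrons lower the PMOS |threshold|

# NMOS slows down (less current); PMOS speeds up (more current).
print(aged_nmos < fresh < aged_pmos)  # True
```

Even a 50-mV shift moves the drain current by several percent in this toy model, which is why a fixed drain-current change makes a convenient failure criterion.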
The precise amount of degradation for a transistor is a complicated function of its bias, or operating, conditions. Usually, a fixed amount of device degradation is assigned as a failure criterion, and the time to reach that amount is used to gauge the robustness of the technology against HCI degradation. For example, if it takes five months to reach a 10% change in the drain current of a transistor, the lifetime of that transistor is five months. Over the past 15 years, lifetimes have rapidly decreased from roughly 10 years to just a few months. This downward trend has caught the eye of many semiconductor manufacturers, and it is indicative of the severity of HCI and circuit aging.
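The lifetime criterion can be turned into a small calculation. HCI drain-current degradation is commonly fit with a power law in stress time; the constants below are hypothetical, chosen only to illustrate the bookkeeping, not measured values:

```python
# Hedged sketch: HCI degradation is often fit as delta_Id/Id = A * t**n.
# A and n are hypothetical fitted constants, not values from the article.
A = 0.02   # fractional drain-current loss after 1 month of stress
N = 0.5    # time-acceleration exponent (typically 0.3 to 0.6)
FAILURE_CRITERION = 0.10   # a 10% drain-current shift defines "lifetime"

def degradation(t_months):
    """Fractional drain-current loss after t months of stress."""
    return A * t_months ** N

def lifetime_months():
    """Invert the power law for the time to reach the failure criterion."""
    return (FAILURE_CRITERION / A) ** (1.0 / N)

print(round(lifetime_months(), 3))  # 25.0 months for these example values
```

With fitted constants measured on real silicon, the same inversion yields the lifetime figures the article cites, whether that is 10 years or a few months.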
Circuit aging is an unavoidable consequence of VDSM technology. The breadth and depth of its degradation will only expand as designs use smaller transistors and operate at faster speeds. Critics have argued that lowering the power-supply voltage with each successive semiconductor-technology generation will significantly lower the electric fields inside each transistor and, they claim, make HCI and circuit aging disappear. But evidence proves otherwise (Fig. 2).
Incremental drops in power-supply voltage simply aren't enough to offset the rigorous pace of Moore's Law. Electric fields inside transistors will still increase, HCI effects will still occur, and SoCs will still be prone to circuit aging. Given this fact, it's important to understand why each part of the SoC design—analog, digital, and memory—will be at risk.
Traditionally, analog circuitry has been implemented with technology at least one to two generations behind the state-of-the-art digital process. With the explosion of wireless communications devices, such luxuries can no longer be afforded. Current mixed-signal designs require real-time analog-to-digital or digital-to-analog conversion, not to mention RF transceiver capability. As a result, the analog portions of the SoC are now being designed with the same short channel lengths as their digital counterparts.
Certain principles of analog design make it very susceptible to circuit aging. First, analog circuitry is constantly biased even when it isn't in active operation. This means that HCI degradation is constantly occurring, as transistors are always conducting current. Next, analog performance is more closely linked to characteristics such as gain than to drain-current levels. It has been shown that while current levels might degrade only slightly, the gain can degrade significantly. Third, HCI worsens a long-time enemy of analog designs—mismatch, which occurs when identically designed devices differ from one another. Experimental studies have shown that mismatch in differential amplifiers and current mirrors (two staple components of analog design) is enhanced by HCI degradation, and this mismatch ultimately contributes to circuit-performance degradation over time.
In addition, analog design has been plagued by the so-called nonscalability of the power-supply voltage. The continual reduction of the power supply limits the amount of amplification each stage can provide. Therefore, more amplification stages have to be added at a cost to other design specifications, such as area and power consumption. Many state-of-the-art technologies now have dual power supplies: a reduced voltage for the digital portion and a higher voltage for the analog portion. With the combination of short channel lengths and high power-supply voltages exacerbating analog circuit aging, the overall outlook for the analog portions of SoCs isn't good.
The outlook for the digital portion of SoCs isn't much better. Circuit aging is a strong function of the rise and fall times of the input and output signals of digital blocks. The longer the transition times, the greater the amount of aging, because the transistors within these blocks spend more time in degradation-prone bias conditions. The digital section of the SoC has a myriad of such transition times, owing in part to different loading conditions at different nodes. Consequently, some blocks will invariably age more rapidly than others. As a result, the timing relationships between their respective output waveforms might drift from their intended design specifications. Such disturbances can degrade circuit performance and place the operation of the entire digital portion in jeopardy.
This uneven aging is compounded by the fact that circuit aging also depends on switching activity. Higher switching activity means more input and output transitions over the same period of time and, consequently, more aging. Because different parts of the design switch at different rates, nonuniform circuit aging will again cause timing relationships to drift. Aside from switching activity, minute voltage spikes, or overshoots, can also cause circuits to age. Voltage overshoots as small as 0.020 V can dramatically increase HCI degradation for the transistors attached to the overshooting node, because such spikes act like momentarily raised power-supply levels. Even though they last only a brief moment, they suddenly increase the electric fields within the affected transistors and place them under stress. Voltage overshoots occur all the time in digital circuits, whether from interconnect coupling between adjacent parallel wires or from self-coupling between a block's input and output nodes during switching events.
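The dependence on transition time and switching activity can be sketched with a toy calculation (not a real aging model): if degradation accrues mainly while a node is mid-transition, a node's relative age over a fixed interval scales with the product of switching activity and transition time.

```python
# Toy illustration only: relative age ~ (activity) x (transition time),
# i.e., the accumulated time spent in the degradation-prone bias region.
def relative_age(toggles_per_cycle, slew_ns, cycles):
    """Accumulated nanoseconds spent mid-transition over the interval."""
    return toggles_per_cycle * slew_ns * cycles

# A busy node with sharp edges vs. a quiet node with slow edges,
# both observed over the same billion clock cycles.
fast_node = relative_age(toggles_per_cycle=0.5, slew_ns=0.1, cycles=10**9)
slow_node = relative_age(toggles_per_cycle=0.1, slew_ns=1.0, cycles=10**9)

# The lightly switched but slow-slewing node still ages twice as fast,
# which is why timing relationships between blocks drift apart.
print(slow_node / fast_node)  # 2.0
```

The point of the sketch is that neither activity nor slew alone predicts which block ages fastest; it's their product that matters.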
The sensitive nature of circuit aging isn't lost on the embedded-memory portions of the SoC. In SRAM-type memory cells, for example, circuit aging can manifest itself through HCI degradation of the pass transistors. These transistors act as gatekeepers during the write and read phases of memory operation. Their bidirectional operation (meaning each pass transistor sees twice the activity of an ordinary transistor) enhances HCI degradation. The end result is an increase in access time, one of the most critical memory-design performance specifications. Sense amplifiers, crucial for proper operation of DRAM- and SRAM-type memories, suffer from circuit aging too, as they are basic analog blocks and share many of the previously mentioned analog characteristics.
Simulating Circuit Aging
Circuit aging is a design phenomenon and requires the creation of design solutions to properly detect, analyze, and solve its potential problems. A good circuit-aging simulator must have the traditional levels of high capacity and high speed. Plus, it must be extremely accurate.
The first step in simulating circuit aging is developing accurate models to faithfully predict real-life degradation. The simulator will use two kinds of models. The first requires the modeling of transistor substrate and gate currents, which are the two monitors of HCI degradation. This model must be accurate over a broad range of operating conditions and different transistor sizes in order to be usable in SoC designs. Without such attributes, the simulator cannot accurately calculate the age of each individual transistor within a design. Presently, these two current types are either poorly modeled or not modeled at all inside most simulators.
The second type of model is the degradation model. The simulator will use it to map the age of each individual transistor to its respective current degradation levels. The formulation of this type of model requires that individual transistors be constantly measured, or stressed, at an accelerated pace over a period of days. Such prolonged testing times are needed in order for the data, and the subsequent models, to represent true silicon behavior. The attention to detail introduced at the modeling stage is necessary and underscores the importance of modeling accuracy involved in simulating VDSM effects.
Furthermore, in circuit-aging simulations, the underlying simulation technology must be very precise. Voltage waveforms have to be extremely accurate for the sake of reproducing actual transistor bias conditions, which are used to calculate substrate and gate currents. The dependencies of ISUB or IGATE currents on transistor voltages are exponential in nature. Any small inaccuracies in voltages can have a large impact on current levels and, ultimately, age calculations.
For example, in a typical 0.18-µm technology, 10% error in drain voltage can result in substrate current levels that are off by approximately 150%! Such sensitivities on voltage waveforms imply that any coupling effects within the design must be captured very accurately. This is particularly true for SoCs where the high densities of interconnect wires can produce capacitive coupling between adjacent wires. Even small voltage overshoots between 0.020 and 0.050 V can increase circuit age as much as five times in some situations. To effectively simulate circuit aging, accuracy in the simulation technology as well as in the input model is of paramount importance.
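The voltage sensitivity quoted above can be reproduced with the widely used lucky-electron form of the substrate-current model, in which ISUB depends exponentially on the inverse of the drain overdrive. The constants below are hypothetical fitted values chosen for illustration, not numbers from the article:

```python
import math

# Hedged sketch of the lucky-electron substrate-current model:
#   I_sub ~ C * I_d * exp(-B / (V_ds - V_dsat))
# C and B are hypothetical fitted constants, not measured values.
C = 2.0      # pre-factor, technology-dependent (illustrative)
B = 11.3     # volts; tied to the impact-ionization energy (illustrative)
V_DSAT = 0.4 # saturation voltage, volts

def i_sub(i_d, v_ds):
    """Predicted substrate current for drain current i_d at bias v_ds."""
    return C * i_d * math.exp(-B / (v_ds - V_DSAT))

nominal = i_sub(1e-3, 1.8)        # 1 mA at the nominal 1.8-V drain bias
high    = i_sub(1e-3, 1.8 * 1.1)  # same device, +10% drain voltage

# With these constants, a 10% voltage error inflates the predicted
# substrate current, and hence the computed age, by roughly 150%.
error = high / nominal - 1.0
print(round(error * 100))  # roughly 150 (percent)
```

Because the voltage sits inside the exponential, there is no way to trade simulation accuracy for speed here: small waveform errors are amplified into large age errors.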
To truly evaluate circuit aging in SoC designs, both dynamic and static timing approaches need to be considered at various stages in the design flow (Fig. 3). A large-capacity, high-speed, and accurate dynamic simulator can be used during the final verification phase. This is where maximum simulation accuracy is possible because of the availability of parasitic information (like interconnect capacitance) used for calculating interconnect coupling.
In order to run a meaningful dynamic simulation, designers need to specify how many years the design is supposed to operate reliably. They need to provide a set of input-voltage waveforms or vectors as well. These hot-input vectors should be selected to cause the most switching activity, hence degradation, within the design. The idea is to evaluate the performance of the design after simulation in order to see if the functionality is still preserved after the specified years of operation. If functionality is preserved, then the next step is to examine the level of deterioration in performance results.
While a dynamic verification tool offers the most accuracy, it has some drawbacks. One is its reliance on a set of hot vectors that produce true worst-case circuit aging. These are very difficult to generate. To do so, an exhaustive number of input vectors must be randomly generated and fed into the dynamic simulator. Then, a worst-case input vector must be located for each transistor, and all of these vectors must be combined to find the worst-case degradation of the entire circuit. For any current 32-bit-input design, the total number of possible input combinations is astronomically large, far too many to simulate realistically during short product-design windows.
Static Analysis Eliminates Vectors
Another approach is the use of static timing analysis. By its very nature, a static approach doesn't rely on input vectors. When properly implemented, static analysis finds not only the worst-case degradation for each transistor, but also the worst-case delay paths. The static circuit-aging simulator can trace through the entire SoC, performing age calculations for each stage of the design. Worst-case conditions are used for interconnect coupling and transistor rise/fall times.
In the end, designers receive a report detailing which paths have aged the most. They can then examine whether those delays exceed any design specifications for proper circuit operation. If there's a problem, the static tool outputs a list of devices within each path that need to be fixed. The case against static analysis is that it can produce nonphysical results, such as finding a delay path that doesn't exist. This is a possibility for certain logic styles that make extensive use of feedback loops, in which case a dynamic simulation is the better approach.
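The tracing and reporting step can be sketched as a toy pass (hypothetical data structures, not any real tool's algorithm or API): each stage of a path carries a fresh delay and a worst-case aging factor, and the report flags the path whose delay grew the most.

```python
# Toy static-aging pass. Each path maps to a list of stages, where a
# stage is (fresh_delay_ns, worst_case_aging_factor). All values are
# made up for illustration; path names are hypothetical.
paths = {
    "clk->q->adder->reg": [(0.20, 1.05), (0.45, 1.12), (0.15, 1.02)],
    "clk->q->mux->reg":   [(0.20, 1.05), (0.30, 1.25), (0.15, 1.02)],
}

def aged_delay(stages):
    """Path delay after aging: sum of each stage's scaled delay."""
    return sum(delay * factor for delay, factor in stages)

def delay_growth(stages):
    """How much the path delay grew relative to the fresh design."""
    return aged_delay(stages) - sum(delay for delay, _ in stages)

# Report the path whose delay degraded the most over the product lifetime.
worst = max(paths, key=lambda p: delay_growth(paths[p]))
print(worst)  # clk->q->mux->reg
```

Note that the path that aged the most need not be the slowest path in the fresh design, which is exactly why an aging-aware report differs from ordinary static timing output.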
In summary, circuit aging is a new phenomenon for SoC designs at 0.18 µm and below. Hot-carrier-induced degradation is the responsible physical mechanism. All parts of the SoC can be affected, but the parts hit hardest are those that combine short channel lengths, high switching activity, and elevated power-supply voltages. Precise simulation of the effects of circuit aging on SoC performance calls for a mix of dynamic and static approaches.
Several solutions on the market can address the circuit-aging issue. For example, BTA Technology is working on tools that simulate circuit aging at the Spice and gate levels. These tools are coupled with software to accurately model the underlying physical ailment, the hot-carrier effect. Such a two-pronged approach is necessary for the EDA industry to successfully describe VDSM phenomena and fold them into the design-tool flow.
Regardless of the analysis strategy, circuit aging creates a strong requirement for a simulator that's both accurate and fast. The sensitive dependencies of the underlying degradation mechanisms forbid any less-accurate solution. At the same time, the increased complexity of SoC designs requires fast simulation times. The EDA industry has responded before to similar challenges. Will it now rise up and respond to the latest one?