Electronic Design

Learn "The Rules Of The Game" For Powering Today's Processors

Load lines and “droop” are good features to have in your power-supply-design arsenal—if you know how to apply them correctly.

DESIGN VIEW is a summary of the complete DESIGN SOLUTION contributed article, which follows.

Rapidly evolving processors consistently challenge power-supply designers to solve complex power-delivery issues for desktops, workstations, and servers. In fact, powering the microprocessor is the most difficult power-delivery problem in the whole computer system.

Faster transients and improved operation over temperature are just a few of the new design requirements. But the most difficult design challenge stems from the fact that the processor's specified operating-voltage tolerance keeps getting tighter and tighter. It's a bit like playing a video game in which each successive level becomes increasingly difficult. To be successful, you must first understand the rules—in this case, the processor's requirements.

But understanding these "rules of the game" means more than just memorizing the "numbers." You need to understand the requirements and potential options so that you can make better tradeoffs and decisions in your designs and products. Plus, you'll have an edge if you exploit the tricks and techniques that are available.

This article explores some of those tricks of the trade to better power delivery. For example, to simplify specifications and better define processor power requirements, Intel uses a "load-line" approach. Load line is basically an operating voltage window. If the voltage stays within the window, the power supply meets the processor specification. Another approach is a "droop" system, where the voltage decreases at a defined rate as the operating current increases.

Discussed in depth is the dynamic voltage identification (VID) spec, or VID "on-the-fly." VID allows the processor to save power by lowering its VCORE voltage while running. A voltage-regulation-module (VRM) control-chip design example is included.

HIGHLIGHTS:
  • "Tight Windows" Open Doors For Robust Designs: To meet performance specifications, each evolution of Intel's processors demands higher currents and tighter operating-voltage windows. This often-misunderstood limitation is crucial to achieving a robust design.
  • Get On The Load Line: Load line is the min-max operating-voltage window that the power supply must meet over the load-current range.
  • "Droop" Without A Performance Slump: Droop is the slope of the load line, or the change in output voltage divided by the change in load current. Once considered a drawback, droop, when properly applied, has become important with the advent of high-speed transient loads in today's processors.
  • Sticking To The Budget: A number of items affect a processor's operating voltage, so they must be budgeted accordingly. These include VTRANSIENT, VDROOP bus, and the ripple voltage on the output capacitors caused by the inductor ripple current.
  • "On-The-Fly" VID: The dynamic voltage identification (VID) specification, or VID on-the-fly, lets the processor save power. VID, a parallel digital word, tells the voltage regulator what voltage to output.

The full article follows.


"Tight Windows" Open Doors for Robust Designs
To meet performance specifications, each evolution of Intel’s processors demands higher currents and tighter operating-voltage "windows." If the processor voltage is too low, the processor can’t meet its maximum clock speed due to slower internal propagation delays. Conversely, reliability and operating life degrade exponentially if the processor voltage is too high.

Unfortunately, the processors’ tight upper operating-voltage limit is often misunderstood, yet it’s becoming more and more crucial to a robust design. Moreover, it’s a natural consequence of the evolution toward faster processors.

To reach faster speeds, smaller chip geometries are used, and these require lower operating voltages. Smaller geometries also mean thinner gate oxides; in a 0.13-µm process, the gate oxide is only around a dozen atoms thick! The rate at which these thin oxides degrade increases very quickly as the operating voltage of the processor’s core logic, VCORE, rises (Fig. 1). This relationship has now reached the point where a 50-mV increase can take up to a year off the processor’s operating life. The bottom line: even though these stringent voltage requirements present a real challenge in processor power-supply design, very important physical constraints justify them.

Get On The Load Line
To simplify specifications and better define the processor power requirements, Intel uses a "load-line" approach. Load line is the min-max operating voltage window that the power supply must meet over the load-current range. Load line is a simple concept: as long as the voltage stays within this window, under any processor load or load change, the power supply meets the processor specification. So understanding the load line is key to meeting the processor’s requirements. In fact, how well you meet it can make the difference between a marginal design and a robust, trouble-free product. But beware—although staying in the operating window is simple in concept, it’s often difficult to do in practice.

To meet the narrow voltage window of today’s processors, Intel changed its approach. Instead of specifying the operating voltage as a fixed voltage with ± tolerances (VRM 8.x), the company changed to a "droop" system, in which the voltage decreases at a defined rate as the operating current increases (Fig. 2). The window now encompasses both static and transient voltage effects.
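As a rough sketch of the window check, a droop-style load line can be modeled as V = VID − R_LL·I with a tolerance band around it. Every number below (the VID, the slope, the band, the load current) is an invented illustration, not a value from any Intel specification:

```python
def load_line_window(vid, r_ll, tol_band, i_load):
    """Min/max voltage window at a given load current for a droop-style
    load line: center = VID - R_LL * I (all values illustrative)."""
    v_center = vid - r_ll * i_load          # droop: voltage falls with current
    return v_center - tol_band, v_center + tol_band

def meets_spec(v_measured, vid, r_ll, tol_band, i_load):
    """True if a measured Vcore sits inside the window at this load."""
    v_min, v_max = load_line_window(vid, r_ll, tol_band, i_load)
    return v_min <= v_measured <= v_max

# Hypothetical numbers: 1.5-V VID, 1.5-mOhm slope, +/-25-mV band, 40-A load
v_min, v_max = load_line_window(1.5, 0.0015, 0.025, 40.0)
```

The point of the model is that the spec limit moves with load current: 1.44 V is in-spec at 40 A here, even though it would be far out of spec at light load.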

"Droop" Without A Performance Slump
"Droop" is the slope of the load line, or the change in output voltage divided by the change in load current. Droop can be represented by a resistor placed between a well-regulated power supply and the load. Either an actual resistor can be placed in the circuit or a virtual resistance created by the regulator’s control loop. Simply put, as the load current increases, the voltage drop across the equivalent series resistance increases, causing the voltage across the load to decrease or "droop."

In the past, power-supply droop was considered a bad thing, so the trend had been tighter and tighter static voltage regulation. However, with the advent of the high-speed transient loads of today’s processors, proper application of droop is a valuable feature. Also, it’s an important tool to help designers meet the stringent voltage requirements of processor applications. To understand the advantage of droop, let’s look at a load-step transient event and compare a power supply with no droop to a power supply that has droop.

With droop, the transient-voltage mechanism is the same as in a supply without droop, and the negative transient is identical. But droop causes the static output-voltage level to decrease as the load increases. This means that after the transient event, the static regulated voltage remains lower and doesn’t return to its original value. When the load current later steps down, the positive 30-mV transient starts from that lower static voltage and doesn’t reach as high an absolute voltage. So the negative and positive transient excursions don’t directly add, and the total excursion is less than the sum of the two (Fig. 3). If the droop voltage is optimized so that it equals the peak transient voltage, the positive transient will just return VCORE to its original starting voltage. The result is a total transient excursion of just 30 mV, half the excursion that would occur without droop.

The end result: Using half as many output capacitors as in the non-droop system will give you the same transient excursion, thereby decreasing power-supply cost and board-space requirements. Or, the smaller excursion will make it easier to meet the tight operating-voltage window of the new processors. To summarize, the load line and droop are good things that greatly help the designer meet the transient response. However, a number of other items that will affect the processor’s operating voltage must be taken into account, and they must be budgeted accordingly.
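The halving argument can be checked with a little arithmetic. Working in millivolts and taking the light-load static level as the zero reference (the 30-mV figure is the example value from the text):

```python
def pp_excursion_mv(v_transient_mv, v_droop_mv):
    """Peak-to-peak Vcore excursion for a load step up then back down.
    Step up: undershoot of v_transient below the light-load level, then
    settle v_droop lower. Step down: overshoot of v_transient above that
    lower static level, then settle back at the starting level."""
    undershoot = -v_transient_mv
    overshoot = max(v_transient_mv - v_droop_mv, 0)
    return overshoot - undershoot

no_droop  = pp_excursion_mv(30, 0)    # negative and positive excursions add
opt_droop = pp_excursion_mv(30, 30)   # droop equal to the peak transient
```

With no droop the two 30-mV excursions add to 60 mV peak-to-peak; with droop optimized to the transient amplitude the total is 30 mV, matching the article's claim.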

Sticking To The Budget
The main items in a VCORE voltage-tolerance budget are:
  • VTRANSIENT, the positive and negative transient excursion;
  • VDROOP bus, caused by the voltage drop across the etch between the output capacitors and the processor;
  • ripple voltage on the output capacitors caused by the inductor ripple current;
  • "static regulation" of the voltage-regulation-module (VRM) control chip;
  • temperature effects.

Figure 4 shows an actual voltage budget, the various items that affect the processor voltage, and the advantages of using droop to achieve a tight operating-voltage window. As shown, droop improves the tolerance budget from ±5% or a total of 10%, to +5%/-2% for a total of 7% for this design.
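The budgeting itself is simple bookkeeping: sum the worst-case contributors and compare the total against the window. The line items mirror the list above, but every number here is a placeholder for illustration, not a value from the article's Figure 4:

```python
# Placeholder worst-case contributions to the Vcore budget, in millivolts
budget_mv = {
    "transient excursion":     30,
    "droop-bus IR drop":        8,
    "output-capacitor ripple":  7,
    "VRM static regulation":   10,
    "temperature drift":        5,
}

vid_mv = 1500                      # assumed nominal VID voltage
total_mv = sum(budget_mv.values())
pct = 100.0 * total_mv / vid_mv    # budget as a fraction of nominal
```

If the summed worst case exceeds the specified window, something in the list has to give: more output capacitance, a tighter regulator, or shorter etch to the processor.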

To lower the output voltage as current increases, you need to measure the output current—and do so accurately over the system’s full current and temperature operating range. There are a number of ways to measure power-supply output current in a VRM. The table shows the most commonly used techniques and the advantages/disadvantages of each.

Multiple Phases, Multiple Challenges
Because today’s processors require more current than can be practically supplied by a single-phase VRM, multiple-phase VRM designs have become the standard approach for output currents over 25 A.

Going the multiple-phase route offers many advantages, including lower individual FET current, better thermal dissipation, and improved transient response. However, using multiple phases presents the new problem of how to balance the currents in the individual phases. When the phase currents aren’t well balanced, it can cause thermal problems and have a negative impact on VRM efficiency. Accurate current measurement of each phase is again critical, as it is in meeting the load line, to ensure proper phase current balance.
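A back-of-the-envelope sketch shows why balance matters thermally: conduction loss goes as I², so the hottest phase dissipates disproportionately. The load current, phase count, and imbalance figure below are made-up examples:

```python
def phase_currents(i_out, n_phases, balance_error):
    """Average and worst-case per-phase current for a given fractional
    imbalance (simplified model; real imbalance varies phase to phase)."""
    i_avg = i_out / n_phases
    return i_avg, i_avg * (1.0 + balance_error)

# 80-A load, 4 phases, 10% imbalance in the worst phase
i_avg, i_worst = phase_currents(80.0, 4, 0.10)

# conduction loss scales with current squared, so a 10% current
# imbalance costs about 21% extra dissipation in the worst phase
loss_ratio = (i_worst / i_avg) ** 2
```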

"On-The-Fly" VID
One final item that needs to be addressed is the dynamic voltage identification (VID) specification, sometimes called the VID on-the-fly specification. VID, a parallel digital word, tells the voltage regulator what to output. Dynamic VID allows the processor to save power by lowering its Vcore voltage while running. This voltage change can be rather large, up to 450 mV, and has to be completed within certain time limits. Also, the current-limit set point must be considered: When going from a low to a high voltage, the VRM has to support the increased processor current plus the current going into the output capacitors to get the voltage to slew at the 2.5-mV/µs rate. Special consideration also needs to be taken when going from a high to a low voltage, because the processor current alone may not be able to discharge the output voltage fast enough.
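The extra current needed during a dynamic-VID transition follows directly from I = C·dV/dt. The 2.5-mV/µs slew rate is from the text; the output-capacitance value is an assumed example:

```python
def cap_slew_current(c_out_f, slew_v_per_us):
    """Current into (or out of) the output capacitance while the VID
    target slews: I = C * dV/dt."""
    return c_out_f * slew_v_per_us * 1e6   # convert V/us to V/s

# Assumed 4000 uF of output capacitance at the spec'd 2.5-mV/us rate:
i_cap = cap_slew_current(4000e-6, 2.5e-3)
```

Here the capacitors alone demand roughly 10 A on top of whatever the processor draws, which is why the current-limit set point, and the regulator's ability to sink current on downward transitions, both need attention.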

Caution: Some VRM controllers don’t allow the output to sink current. Others that sink current can’t monitor the output current when it’s negative (sinking). This may be acceptable in lower-current applications. But it could cause instabilities and excessive undershoot when operating in this mode, especially at higher currents and fast load steps.

Selecting the right VRM controller is crucial to ending up with a stable, robust design while still meeting the aggressive performance required for proper operation of today’s processors. A number of questions should be asked when making a selection:

  1. Will the final design be accurate enough to get the CPU supplier’s approval? For example, Intel currently approves only inductor-current sensing to set the load line, whereas some manufacturers still use RDS(ON) sensing.
  2. Does the design require a narrow set of component values, or can a wider range of components be utilized so that you can use your own preferred components?
  3. Is the chip versatile enough to be employed in a number of different applications? And is it accurate and flexible enough for use in future Intel processors and flexible-motherboard (FMB) designs? (Intel specifies an FMB power requirement that’s the maximum any processor in a family will require, now and in future releases.) Important parameters in this respect would be operating-frequency range, selectable number of phases, adjustable load line, etc.
  4. Are the architectures proven, and do they have a good track record? Totally new architectures, while often exciting, carry additional risks and possible surprises. Can your project schedule support these risks?
  5. What about support documentation and design tools for the controller? Is there sufficient information to understand how the chip works—and in enough detail—to make a solid design as well as help in debug and bring-up of the first board? Are the design tools intuitive enough to simplify the design process?
  6. Finally, and most importantly, is the cost/performance of the final design acceptable? The lowest-cost implementation that doesn’t meet specifications isn’t a good tradeoff. And a high-performing design that puts the end product at a cost disadvantage won’t win praises from management. From a supply-chain standpoint, also consider the availability of alternate sources for the device.

Design Example
A good example of a well-balanced VRM10 control chip is the FAN5019 multiphase controller. The FAN5019 can be used in 2-, 3-, or 4-phase VRM10 applications. Figure 5 shows the schematic of a three-phase FAN5019 design.

As the design example shows, the FAN5019 uses a combination of bottom-FET current sensing and inductor DCR current sensing to take advantage of each approach. The FET current-sensing approach gives good current balance between phases, while the inductor current-sense circuit provides accurate setting of the output droop voltage to meet tight load-line requirements.

The FAN5019 controller exploits the best architectural trade-offs of the various schemes mentioned in this article. Its full data sheet includes a comprehensive, step-by-step design guide. An extensive Mathcad design file allows designers to easily optimize the design for their particular application with their selected components.

Additional benefits include the fact that it’s second-sourced, providing an alternate source for logistic planning. A companion device, the FAN5009 driver, is available in an MLP package. It offers the key advantage of higher power dissipation—a critical factor in higher frequency designs.
