
Digital Control Squeezes 40 A from Buck Converter

Aug. 1, 2007
Using digital-loop control, a PWM controller minimizes duty cycle to optimize the efficiency of a single-phase synchronous buck converter and boost its current delivery.

Today's bandwidth requirements are increasing at a record pace with the growing popularity of data-intensive activities such as online voice, video, gaming and commerce. Data centers are increasing their capacity accordingly, but they are facing tremendous challenges related to the cost and availability of electricity, as well as the growing thermal stresses produced by the increased computing requirements. Therefore, it is becoming critical that the latest power designs offer efficient operation that ensures bandwidth requirements will be met, while conserving power usage and heat generation.

Digital power control offers a means of increasing power-conversion efficiency in servers, while also simplifying the design. Applying digital loop control enables the design of more-efficient buck converters, which also permits higher current per phase. Using a digital controller IC that implements a unique gate-drive control algorithm, a single-phase synchronous buck converter delivers 40 A of output at low voltage. This design also achieves a 2% efficiency improvement over existing solutions.

Modified Buck Converter

Digital implementations of power converters have often been considered less efficient than analog implementations. However, new digital PWM controllers are capable of achieving efficiencies on par with or better than conventional analog designs. At the same time, they can extend performance limits.

A variation on the standard buck converter design features a second power train in parallel with the first one (Fig. 1). This design, which is built around Zilker Labs' ZL2005 digital power controller, is still a single-phase converter, but it contains a second set of MOSFETs and a second inductor. With its gate-drive control algorithm that adjusts deadtime, this particular controller enables the design to be optimized for efficiency, while also enabling high current output from single-phase operation. This design will deliver 40 A, which exceeds the 20-A to 30-A limit usually encountered with single-phase control.

The converter will be designed to convert a 12-V input to a 1.8-V or 1-V output at up to 40 A. The main design goal is to achieve the highest efficiency possible while maintaining desirable transient performance. The targeted maximum output ripple for this design is 10 mV, and the output voltage should remain within 3% of its setpoint during a 25% load-current step.

The dual-power-train design allows for the use of standard components, because each inductor carries only half the output current and most readily available inductors are rated for 30 A. It also enables the use of lower-profile inductors.

Since the design is being optimized for efficiency, it's worth reviewing all the design elements that contribute losses in the buck converter. As in the standard buck design, each component in the modified power stage of the buck converter dissipates power. The input and output capacitors dissipate power in their equivalent series resistances (ESR) proportional to the ripple current flowing through them.

The inductors dissipate power due to their winding and core material losses. Core loss is proportional to the ripple current flowing through the inductor and the frequency of the ripple.

The synchronous MOSFET (QL) dissipates power in two ways: in its channel resistance (RDSON) as a function of current and in the gate-drive current needed to turn the MOSFET on and off.

The gate-drive current loss is proportional to frequency. Likewise, the control MOSFET (QH) also dissipates power in its RDSON and gate-drive current, as well as in its turn-on and turn-off transitions.

The power dissipated in these transitions, called switching loss, is proportional to frequency. Because many of the power-stage component losses are proportional to frequency, increasing frequency increases power loss and thus lowers efficiency.
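
To see how these terms interact, the short sketch below tallies the main loss contributions as the switching frequency changes. The component values (effective resistance, gate charge, transition time, reference core loss) are illustrative assumptions rather than measured values from this design; only the scaling behavior matters here.

# Illustrative loss tally for a synchronous buck power stage. All component
# values are assumptions chosen only to show how the frequency-proportional
# terms trade against conduction loss.
def stage_losses(f_sw, i_out=40.0, v_in=12.0,
                 r_dson=0.9e-3,      # effective low-side resistance (ohm)
                 q_g_total=60e-9,    # total gate charge switched per cycle (C)
                 v_drive=5.0,        # gate-drive voltage (V)
                 t_sw=10e-9,         # combined turn-on/turn-off time (s)
                 p_core_ref=0.5,     # inductor core loss at f_ref (W)
                 f_ref=300e3):
    """Return estimated loss terms (W) for one operating point."""
    p_conduction = i_out ** 2 * r_dson               # roughly frequency independent
    p_gate = q_g_total * v_drive * f_sw              # proportional to frequency
    p_switching = 0.5 * v_in * i_out * t_sw * f_sw   # proportional to frequency
    p_core = p_core_ref * (f_sw / f_ref)             # roughly proportional to frequency
    return {"conduction": p_conduction, "gate": p_gate,
            "switching": p_switching, "core": p_core}

for f in (300e3, 600e3, 1.2e6):
    losses = stage_losses(f)
    print(f"{f / 1e3:6.0f} kHz: total ~{sum(losses.values()):.1f} W  {losses}")

Doubling the frequency roughly doubles the gate-drive, switching and core terms while leaving conduction loss unchanged, which is why the lowest practical frequency is preferred when efficiency is the priority.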

The on-time of the control MOSFET QH sets the conversion ratio from the input voltage to the output voltage. When QH is off, the inductor current continues to flow in the synchronous MOSFET QL.

To avoid a short circuit across the input-voltage supply, the ZL2005 must ensure that QH and QL are never on at the same time (a condition called cross conduction). When both QH and QL are off, the interval is called deadtime.

During the deadtime, the inductor current must flow through the parasitic drain diode in QL. The voltage drop and the resulting power loss in this diode are greater than what would occur if the current were flowing through the channel of QL. Therefore, the deadtime should be minimized, but not to the extent that the MOSFETs cross conduct. This situation results in an optimum value of the deadtime for both the rising and falling transitions of the high-side gate drive (GH). If the MOSFET timing varies from this optimum in either direction, efficiency will be reduced.

Deadtime Control

Zilker Labs' Digital-DC technology incorporates an algorithm that continuously optimizes the MOSFET deadtime based on the efficiency of the power stage. Typical analog PWM controllers with this feature try to minimize the deadtime, but in doing so may set it so low that cross conduction begins to occur because of variations in MOSFET capacitance (which can vary from lot to lot).

In contrast, the Digital-DC architecture continuously tries to optimize the efficiency by looking for the minimum duty cycle based on a given input-/output-voltage ratio. This minimum duty cycle corresponds to the highest efficiency. Note that the optimal efficiency point doesn't always occur at the lowest deadtime setting.

Additionally, the algorithm closes its loop around the power-train components, so variations in the FET capacitance or other parameters are captured and compensated for in the calculation.
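
Conceptually, the search can be pictured as a perturb-and-observe loop: nudge the deadtime, watch the duty cycle the regulator needs to hold the output, and keep moving in the direction that shrinks it. The sketch below is only a software model of that idea; the ZL2005's actual algorithm is proprietary and runs in mixed-signal hardware, and the function names, step sizes and converter model here are assumptions.

# Conceptual model only: perturb-and-observe search for the deadtime that
# minimizes the measured duty cycle at a fixed input/output-voltage ratio.
def optimize_deadtime(measure_duty_cycle, t_initial_ns=20.0,
                      step_ns=1.0, t_min_ns=2.0, iterations=50):
    """Walk the deadtime toward the value that yields the lowest duty cycle."""
    t_dead = t_initial_ns
    best_duty = measure_duty_cycle(t_dead)
    direction = -1.0                      # start by shortening the deadtime
    for _ in range(iterations):
        candidate = max(t_min_ns, t_dead + direction * step_ns)
        duty = measure_duty_cycle(candidate)
        if duty < best_duty:              # lower duty cycle -> higher efficiency
            t_dead, best_duty = candidate, duty
        else:
            direction = -direction        # stepped past the optimum; reverse
    return t_dead

# Synthetic converter model whose minimum duty cycle occurs near 6 ns.
def duty_model(t_dead_ns):
    return 0.15 + 1e-4 * (t_dead_ns - 6.0) ** 2

print(f"optimum deadtime ~{optimize_deadtime(duty_model):.1f} ns")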

It is important to mention that the control loops in the ZL2005 are implemented entirely in mixed-signal hardware. No microcontroller or DSP block intervenes in processing the loop-control signals in real time. In this way, the design obtains the performance of an analog controller without the excessive power dissipation and clock frequencies normally associated with a purely digital implementation.

In the ZL2005, the output-voltage error signal is digitized by an A-D converter and processed through the device's control law. The hybrid PWM approach of the controller used in this design processes the resulting digital information and translates the timing information (duty cycle D and its complement D′) into the inputs for the PWM driver. A proprietary architecture and algorithm allow this duty-cycle information to be extremely precise, achieving a resolution ranging from 0.3 ns at 200 kHz down to 30 ps at 2 MHz.

Though beyond the scope of this discussion, the Digital-DC architecture also enables implementation of power-management functions such as tracking, margining, monitoring and sequencing without adding extra components. To control these functions, the ZL2005 supports the PMBus standard command set. Controller operation also may be configured with pin strapping, which is the approach taken in this buck-converter design.

Optimizing Efficiency

The design of the buck power stage requires several compromises among size, efficiency, electrical performance and cost. Size can be decreased by increasing the switching frequency at the expense of efficiency. Cost can be minimized by using through-hole inductors and capacitors. However, these components are physically large and may not offer electrical performance as good as that of surface-mount components.

Frequency Selection

To start the design, the operating frequency must be selected. This frequency is a starting point and may be adjusted as the design progresses. Table 1 summarizes some of the frequency ranges used in popular applications. For our example, we will select a switching frequency of 300 kHz to maximize efficiency.

Digital-DC technology allows the designer to adjust the frequency without changing any components on the board. This allows the designer to select the optimal frequency for highest efficiency after meeting all other design goals. The frequency can be adjusted to predefined values by pin-strapping a dedicated pin to one of the three states available (high, floating or ground). It also can be set to any value from 200 kHz to 2 MHz via the SMBus interface.

Inductor Selection

When selecting an output inductor, several tradeoffs must be considered. Inductance must be sufficient to generate a low ripple current (IOPP). Low ripple current will allow for the use of smaller output capacitance, while still achieving the desired output ripple voltage.

Because high-inductance values compromise output transient load performance, a balance must be struck between low ripple current that allows low output ripple and high ripple current that allows a small output deviation during transient load steps. A good starting point is to select the output inductor ripple equal to the expected load transient step magnitude (IOSTEP):

IOPP = IOSTEP.

Now the output inductance can be calculated using the following equation, where VINM is the maximum input voltage:
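
L = VOUT × (VINM − VOUT) / (VINM × FSW × IOPP)   (Eq. 2)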

In the case of this 40-A design (20 A per inductor), we would use VINM = 14 V, IOPP = 10 A, FSW = 300 kHz and VOUT = 2.5 V (VOUT ranges from 1 V to 2.5 V).

Using Eq. 2, the calculated inductor would be 685 nH. We select a Pulse PG0077.801 750-nH, 1.3-mΩ, 31-A inductor. This part provides the desired inductance with relatively low series resistance (1.3 mΩ), while providing sufficient peak and average current ratings. Furthermore, it is readily available in a small surface-mount package.

Using this design criterion, the ripple current IOPP will be comparable to the maximum output current step requirement. The peak inductor current (ILPK) is calculated using the following equation where IOUT is the maximum output current (average value over a full switching cycle):
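
ILPK = IOUT + IOPP / 2

With 20 A average per inductor and 10 A of ripple, the peak per inductor is 25 A, comfortably within the 31-A rating of the selected part.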

Once an inductor is selected, the ESR and core losses in the inductor are calculated. Use the ESR specified in the inductor manufacturer's datasheet: Power = ESR × ILRMS².

ILRMS is given by:
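
ILRMS = √(IOUT² + IOPP²/12)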

where IOUT is the maximum output current.

In most cases, the inductance will have a significant variation with average load current, and the inductor ESR will have a significant variation with part temperature at operating conditions. Both these effects should be taken into account for meeting the efficiency targets. For high-current applications, it is important to select an inductor with low ESR when efficiency is critical.
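
The short sketch below works through these inductor equations using this design's per-inductor values (20 A per power train, 10-A ripple, 14-V maximum input, 2.5-V maximum output) and the 1.3-mΩ resistance of the selected Pulse part.

# Worked numbers for the inductor equations above, using per-inductor values.
from math import sqrt

V_INM = 14.0     # maximum input voltage (V)
V_OUT = 2.5      # maximum output voltage (V)
F_SW = 300e3     # switching frequency (Hz)
I_OPP = 10.0     # target peak-to-peak ripple current per inductor (A)
I_OUT = 20.0     # average current per inductor (A)
ESR = 1.3e-3     # series resistance of the selected 750-nH inductor (ohm)

L = V_OUT * (V_INM - V_OUT) / (V_INM * F_SW * I_OPP)   # Eq. 2
i_l_pk = I_OUT + I_OPP / 2                             # peak inductor current
i_l_rms = sqrt(I_OUT ** 2 + I_OPP ** 2 / 12)           # RMS inductor current
p_esr = ESR * i_l_rms ** 2                             # winding (copper) loss

print(f"L       = {L * 1e9:.0f} nH  (a 750-nH part is selected)")
print(f"IL,pk   = {i_l_pk:.1f} A  (within the 31-A rating)")
print(f"IL,rms  = {i_l_rms:.1f} A")
print(f"P(ESR)  = {p_esr:.2f} W per inductor")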

Other Components

Digital technology allows the designer to calibrate the sensing element to compensate for process and temperature variation. Combined with a temperature sensor (either internal or external), this calibration provides a more accurate reading of the current.

The calibration may be done during development testing or during final test at the board level. Several parameters can be adjusted based on real measurement of the sense element and stored in the device's nonvolatile memory. These parameters include the gain of the sense element (the RDSON of the MOSFET in the case described below), the offset (layout and sense-element variations) and the temperature coefficient. (For MOSFET RDSON, the variation is usually around 50% over the operating temperature range.)

With calibration, the components selected do not have to be overdesigned to cover temperature variation, thus limiting unnecessary power losses (and additional cost). This tighter bound on output current extends throughout the converter design, resulting in lower requirements on other parts such as the MOSFETs, the input and output capacitors and the inductor.

In traditional analog implementations that use RDSON as the sensing element to set the current limit, the temperature variation of this parameter may imply more than 50% overdesign of components. From 25°C to 125°C, RDSON can increase by 50%.

In addition to this variation, the designer needs to account for the sense element's process variation, which can be as high as 30% (Fig. 2). This means that for a 20-A system used over a 0°C to 125°C range, the current limit needs to be set at 38 A average, requiring the use of a 45-A rated inductor and MOSFETs.

By implementing temperature compensation and on-board calibration, the current limit can be set much tighter, with a setpoint accuracy of better than 5%. This reduces the current-limit setpoint to 22 A average, allowing the selection of a 25-A inductor and MOSFETs. The components selected will then be smaller and less expensive, while providing more precise protection.
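
The arithmetic behind these margins can be sketched as follows. The 30% process spread, the 50% RDSON rise with temperature and the roughly 5% calibrated accuracy come from the discussion above; stacking the tolerances multiplicatively is a simplification, so the results land near, rather than exactly on, the 38-A and 22-A figures quoted.

# Sketch of the current-limit margin arithmetic discussed above.
I_LOAD_MAX = 20.0   # load current the limit must never trip below (A)

# Uncalibrated RDSON sensing: stack the process spread (30%) and the
# RDSON increase from 25 degC to 125 degC (50%).
uncalibrated_setpoint = I_LOAD_MAX * 1.30 * 1.50
print(f"uncalibrated setpoint ~{uncalibrated_setpoint:.0f} A "
      "-> 45-A rated inductor and MOSFETs")

# With temperature compensation and board-level calibration, only the
# residual setpoint accuracy (about 5%) remains.
calibrated_setpoint = I_LOAD_MAX * 1.05
print(f"calibrated setpoint ~{calibrated_setpoint:.0f} A "
      "-> 25-A rated inductor and MOSFETs")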

With these requirements in mind, the IRF6635 is selected as the low-side MOSFET. Rated for 25 A of drain current at 70°C, the IRF6635 has a very low RDSON (1.8 mΩ at 4.5 V), which minimizes conduction loss. Two MOSFETs are used in parallel to spread the current and stay within the current rating of each device (the same is done on the high side).

For the high-side MOSFET, the IRF6636 is selected due to its low gate charge (QG), which minimizes switching loss. For this specific application, where the step-down ratio from input to output is large, the high-side MOSFET conducts for only a small fraction of each switching period, so most of its loss is switching loss.

Input and output capacitance are selected to meet the overall transient target and minimize input and output ripple currents. For high ripple currents, a low capacitance value can cause a significant amount of output-voltage ripple. Likewise, in high transient load steps, a relatively large amount of capacitance is needed to minimize the output-voltage deviation while the inductor current ramps up or down to the new steady-state output-current value.

As a starting point, apportion one-half of the output-voltage ripple to the capacitor ESR and the other half to capacitance, as indicated in the following equations:
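
ESR ≤ VESR / IOPP

COUT ≥ IOPP / (8 × FSW × VC)   (Eq. 5)

Here, VESR and VC are the portions of the output-voltage ripple assigned to the capacitor ESR and to the bulk capacitance, respectively, and IOPP is the total ripple current flowing into the output capacitors.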

With a target of 3 mV ripple at an output voltage of 1 V, the calculated COUT from Eq. 5 would be 3000 µF. To provide some design margin, 4400 µF of output capacitance has been selected for this design.

Input capacitance is determined primarily by the ripple current present on the input of the buck converter. This ripple can be determined from the following equation:
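
ICIN(RMS) = IOUT × √(D × (1 − D)), where D = VOUT / VIN.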

For this design, the RMS ripple current is calculated from this relationship at the 40-A maximum load, as worked through in the sketch that follows this discussion.

The input capacitors should be rated at 1.4 times the ripple current calculated above to ensure a 50% power derating. Ceramic capacitors with X7R or X5R dielectric, low ESR and a voltage rating of at least 1.1 times the maximum expected input voltage are recommended. A combination of ceramic and low-ESR organic or polymer electrolytic capacitors can be used to minimize cost and physical size.

The ripple current in each part can be determined by dividing the total ripple current found above among the entire collection of input capacitors. Calculate the ripple current for each capacitor using current-divider formulas, taking into account the impedance of each capacitor at the switching frequency.
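
The sketch below works the capacitor numbers for this design. It assumes the two in-phase power trains deliver a combined 20 A of ripple current into the output capacitors and uses the nominal 12-V input, 1.8-V output point for the input-ripple estimate; both figures are assumptions for illustration.

# Output- and input-capacitor sizing from the relationships above.
from math import sqrt

F_SW = 300e3        # switching frequency (Hz)
I_OPP_TOTAL = 20.0  # assumed combined ripple current into the output caps (A)
V_RIPPLE_C = 3e-3   # output ripple allocated to the bulk capacitance (V)

c_out = I_OPP_TOTAL / (8 * F_SW * V_RIPPLE_C)   # Eq. 5
# ~2800 uF; the article rounds this to 3000 uF and uses 4400 uF for margin.
print(f"C_OUT >= {c_out * 1e6:.0f} uF")

# Input RMS ripple current at the assumed 12-V input, 1.8-V output, 40-A load.
V_IN, V_OUT, I_OUT = 12.0, 1.8, 40.0
d = V_OUT / V_IN
i_cin_rms = I_OUT * sqrt(d * (1 - d))
print(f"I_CIN(rms) ~ {i_cin_rms:.1f} A")
print(f"rate the input capacitors for ~{1.4 * i_cin_rms:.1f} A of total ripple")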

By using the above techniques, the components selected are optimized for the given operating conditions, yielding a cost-optimized solution as well. Designers always have the option of improving efficiency further by spending more money on better components with lower loss characteristics. However, practical commercial solutions improve efficiency only to the point where the cost increase remains a small part of the overall circuit cost.

Efficiency Result

Fig. 3 shows the measured efficiency for the buck-converter design presented here. Efficiency achieved is above 92% for a 1.8-V output voltage at a nominal current of 20 A, with a very flat curve even at maximum current (40 A).

This high-efficiency performance can be explained by several factors. The first is the use of Zilker Labs' gate-drive control algorithm. The deadtime is adjusted dynamically to 4 ns and 8 ns for the high-side and low-side MOSFETs, respectively. When using a fixed deadtime, a designer would have to increase these values to account for process variation and prevent cross conduction. The additional deadtime would introduce higher body-diode conduction loss in the low-side MOSFETs.

Another factor is the use of MOSFETs in parallel to reduce parasitic inductances, lower the overall resistance and spread the heat better than a single MOSFET would. In our example, two high-side and two low-side MOSFETs were used. Reducing the number of MOSFETs would increase conduction losses, while increasing the number of MOSFETs would increase switching losses.

Using proprietary pin-strap techniques, most of the configuration and setting of the ZL2005 is done without any external components. This reduces the power loss and provides more copper area for improved thermal performance. Driver strength and speed allow the use of lower-resistance MOSFETs without compromising efficiency.

Conventionally, applications requiring more than 20 A to 30 A have been designed using a two-phase solution. One of the drawbacks of using a single-phase solution is the increased current ripple. In our case, the nominal input ripple is about 50% higher than in a typical two-phase solution.

Output ripple also would be reduced in a two-phase solution, which would require less output capacitance for a given transient performance. However, at this current level, the selection of the output capacitors is driven mainly by their ESR, which leads to a similar number of components.

Two-phase solutions also require a complex current-balancing algorithm to ensure that both phases see the same amount of current. Layout becomes even more critical, and additional pins are needed to carry current-sharing information.

The dynamic performance achieved with this single-phase design is well within the design target. A 10-A load step (2.5 A/µs), from 30 A to 40 A, leads to an output-voltage deviation of ±30 mV (3% of VOUT).

Several 20-A to 30-A solutions have been released recently, and they compare as shown in Table 2. A 2% to 4% efficiency improvement has been achieved while reducing system complexity.

Table 1. Design considerations by frequency.

Frequency range          Design considerations
200 kHz to 400 kHz       High efficiency, larger size
400 kHz to 800 kHz       Moderate efficiency, smaller size
800 kHz to 2000 kHz      Lower efficiency, smallest size

Table 2. Comparison of 40-A buck converters.

Supplier             Max IOUT    Efficiency (VIN = 12 V, VOUT = 1.8 V)    Number of phases
Module supplier 1    30 A        86.5%                                    1
Module supplier 2    40 A        86%                                      2
Discrete supplier    40 A        87%                                      2
Zilker Labs          >40 A       88.8%                                    1

