New Devices Embrace Energy Efficiency

March 10, 2010
Power-industry veteran Sam Davis reviews a broad range of electrical efficiency issues from a design standpoint.

(Figures and tables accompanying this article: Fig. 1, reduction in emissions; Fig. 2, multiphase converter; Table 1, TEC in kilowatt hours; Table 2, mode weightings.)

The green movement is growing, as government and industry alike promote energy efficiency as a way to fight global warming. Researchers believe that using more efficient products will reduce the need to generate more electricity. In turn, this will reduce the climate-changing emissions produced by that generation.

Design engineers, however, have confronted energy efficiency issues since the vacuum tube era, through transistors and on to microprocessors. Energy efficiency has always been a consideration because it impacts virtually every aspect of system design. Now, it is being emphasized because greenhouse gas emissions affect the entire planet.

From an electronic design standpoint, boosting energy efficiency involves establishing maximum power consumption requirements for electronic systems, effective power management for microprocessor-based systems, and minimizing standby power. Fortunately, the side effects of improved energy efficiency offer their own benefits too.

ENERGY MANAGEMENT

The Energy Efficient Digital Networks (EEDN) group at the Lawrence Berkeley National Laboratory has been reviewing the distinct consumer electronic products that deliver audio and/or video content in U.S. households. Their numbers are increasing, and current controls and interfaces focus on function without treating energy as a consideration.

EEDN suggests amending existing electronic industry standards to allow better power control in consumer electronic devices. Doing this will require background research to fully document the standards and other factors that affect how power control operates, as well as to design a simple and coherent system to organize amendments and future standards.

Also, EEDN is looking at products whose principal function is to provide network connectivity and that use an increasing amount of energy. These include switches, routers, firewalls, modems, hubs, wireless access points, and Voice over Internet Protocol (VoIP) phones.

As the number of networked IT products increases and Power over Ethernet (PoE) and related technologies expand, the amount of power flowing into network products—as well as the savings from more efficient power conversion and control—will only increase.

EEDN says there should be energy efficiency specifications for network equipment to help manufacturers and consumers move the market to products that use less energy. The U.S. Environmental Protection Agency (EPA) Energy Star program could adopt these specifications to ensure global reach. With the wide range of network products sold today, these first specifications will only cover a few of the most energy-consuming categories.

The EPA established the Energy Star program in 1992 for energy-efficient computers. It identifies efficient products that will reliably deliver energy savings and environmental benefits. The EPA and the U.S. Department of Energy (DOE) work closely with more than 1000 manufacturers to determine the energy performance levels that must be met for a product to earn the Energy Star. Companies awarded the Energy Star emblem often use it as a marketing tool. We can expect a reduction in emissions (Fig. 1) from 2003 to 2012 because of the Energy Star program and the resultant improvements in energy management.

Many Energy Star requirements include both electronic and non-electronic products. For example, Energy Star version 5.0, which became effective July 1, 2009, covers the requirements for computers: desktop computers, notebook computers, and workstations. It doesn't cover computer servers, handheld devices, PDAs, or smart phones.

Power-supply efficiency requirements are applicable to all product categories covered by the Energy Star Computer Specification. Computers using an internal power supply must have 85% minimum efficiency at 50% of rated output, 82% minimum efficiency at 20% and 100% of rated output, and a power factor greater than or equal to 0.9 at 100% of rated output.
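
As a rough illustration of how those limits might be checked during design verification, the following Python sketch compares hypothetical bench measurements against the internal-supply thresholds quoted above; the measurement values are made up for the example:

# Hypothetical check of measured internal-supply data against the Energy Star
# computer-specification limits quoted above: 85% minimum efficiency at 50% load,
# 82% at 20% and 100% load, and power factor >= 0.9 at 100% load.

EFFICIENCY_LIMITS = {20: 0.82, 50: 0.85, 100: 0.82}   # minimum efficiency per load point
MIN_PF_AT_FULL_LOAD = 0.9

def meets_energy_star(efficiency_by_load, pf_at_full_load):
    """efficiency_by_load maps load percentage to measured efficiency, e.g. {20: 0.84, ...}."""
    eff_ok = all(efficiency_by_load[load] >= limit
                 for load, limit in EFFICIENCY_LIMITS.items())
    return eff_ok and pf_at_full_load >= MIN_PF_AT_FULL_LOAD

# Example (made-up) measurements for a candidate supply
measured = {20: 0.84, 50: 0.87, 100: 0.83}
print(meets_energy_star(measured, pf_at_full_load=0.93))   # True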

Power supplies sold with Energy Star computers must be Energy Star qualified or meet the no-load and active mode efficiency levels provided in version 2.0 of the Energy Star program requirements for single-voltage external ac-ac and ac-dc power supplies. This performance requirement also applies to multiple-voltage output external power supplies as tested in accordance with the internal power-supply test method.

Typical energy consumption (TEC) is a method of testing and comparing the energy performance of computers that focuses on the typical electricity consumed by a product while in normal operation during a representative period of time. This involves the computer’s off, sleep, and idle modes, as well as its active state when it does useful work. Table 1 lists the TEC in kilowatt hours for desktop and notebook computers for various functional categories. Typical energy consumption can be determined by:

ETEC = (8760/1000) × (Poff × Toff + Psleep × Tsleep + Pidle × Tidle)

Where:

Px = power in watts in mode x (off, sleep, or idle)
Tx = time spent in mode x, as a percentage of a year
ETEC = annual energy consumption in kWh, based on the mode weightings in Table 2
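
For illustration, the following Python sketch evaluates that formula for a desktop-class machine; the per-mode power levels and weightings used here are hypothetical placeholders rather than the Table 1 or Table 2 values:

# Hypothetical TEC calculation: annual kWh from per-mode power (in watts) and the
# fraction of the year spent in each mode. All values below are placeholders.
HOURS_PER_YEAR = 8760

power_w   = {"off": 1.0, "sleep": 2.0, "idle": 40.0}     # assumed mode power, W
weighting = {"off": 0.55, "sleep": 0.05, "idle": 0.40}   # assumed fraction of year per mode

e_tec_kwh = (HOURS_PER_YEAR / 1000.0) * sum(
    power_w[mode] * weighting[mode] for mode in power_w)
print(f"ETEC = {e_tec_kwh:.1f} kWh per year")   # about 146 kWh per year with these numbers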

POWERING THE MICROPROCESSOR

As indicated by the Energy Star requirements for computers, power consumed by microprocessors and their associated circuits is the major energy efficiency factor. Microprocessor power dissipation (consumption) has been steadily increasing because of the user demand for faster processing, which requires faster clock rates.

Because dynamic power is proportional to CV²f, and higher clock rates typically demand higher supply voltages, power dissipation rises roughly with the square of the clock-rate increase. As a result, microprocessor power dissipation (consumption) and energy density have been rising exponentially since the first microprocessor was introduced in the 1970s. Now, the fastest processors draw as much as 120 to 150 A while requiring voltages in the neighborhood of 1 V.

With CPU power usage on the rise, new software and architectural approaches are necessary to save energy by the better matching of on-chip resources to application requirements. One approach uses sleep and suspend modes to reduce power consumption. This has led to new circuit techniques, called dynamic power management (DPM), that reduce a microprocessor’s average power dissipation by dynamically reconfiguring a system to lower power consumption during low-workload periods.

In principle, DPM identifies low-processing-requirement periods and reduces operating voltage (voltage scaling) and/or frequency (frequency scaling) to reduce operating power consumption. This technique is called dynamic voltage and frequency scaling (DVFS). Furthermore, during these low-power-requirement periods, idle circuits can be turned off to provide even lower power consumption.
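
The following Python sketch illustrates why scaling voltage and frequency together pays off, using the standard CMOS dynamic-power relationship P ≈ C × V² × f; the capacitance and operating points are made-up numbers, not figures for any particular processor:

# Dynamic CMOS power scales roughly as C * V^2 * f, so lowering voltage and
# frequency together during light workloads gives a better-than-linear saving.
# All values below are illustrative assumptions.

def dynamic_power(c_eff_farads, v_volts, f_hz):
    return c_eff_farads * v_volts**2 * f_hz

C_EFF = 20e-9                                   # assumed effective switched capacitance, F
full   = dynamic_power(C_EFF, 1.2, 3.0e9)       # full-speed operating point
scaled = dynamic_power(C_EFF, 0.9, 1.5e9)       # DVFS-reduced operating point

print(f"full speed: {full:.1f} W, scaled back: {scaled:.1f} W "
      f"({100 * (1 - scaled / full):.0f}% reduction)")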

Proposed DPM solutions can be categorized as either predictive or stochastic. Predictive schemes attempt to predict a device’s usage behavior in the future, based on past experience. Stochastic techniques make probabilistic assumptions based on usage-pattern observations. To be effective, DPM must account for the time it takes to change a power-supply voltage. Plus, the processor must be able to operate reliably when its supply voltage or clock rate changes in less than a microsecond.
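
As a minimal sketch of the predictive flavor, the following Python snippet predicts the next idle interval with exponential smoothing and sleeps only when the prediction exceeds an assumed break-even time; the threshold and smoothing factor are illustrative assumptions, not values from any specific DPM implementation:

# Minimal predictive DPM sketch: predict the next idle interval from past idle
# intervals and enter a sleep state only if the prediction exceeds the sleep
# state's break-even time. The constants below are hypothetical.

BREAK_EVEN_S = 0.010   # below this idle time, sleeping costs more than it saves
ALPHA = 0.5            # smoothing factor for the prediction

class PredictiveDpm:
    def __init__(self):
        self.predicted_idle_s = 0.0

    def observe_idle(self, actual_idle_s):
        # Update the prediction after each completed idle period.
        self.predicted_idle_s = (ALPHA * actual_idle_s
                                 + (1 - ALPHA) * self.predicted_idle_s)

    def should_sleep(self):
        return self.predicted_idle_s > BREAK_EVEN_S

dpm = PredictiveDpm()
for idle_s in (0.002, 0.004, 0.030, 0.050):   # observed idle periods, seconds
    dpm.observe_idle(idle_s)
    print(dpm.should_sleep())                  # False, False, True, True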

PROCESSOR POWER MANAGEMENT

One possible power-management approach is a low-dropout (LDO) linear voltage regulator. LDOs with an internal power MOSFET or bipolar transistor can provide outputs in the 50- to 1000-mA range, which is too low for most of today’s microprocessors. Also, LDOs exhibit a typical efficiency of 60% to 70%, which is not appropriate for these applications.

The trend toward higher-current, lower-voltage microprocessors has created a need to supply more than 100 A at about 1 V. The multiphase converter answers this need. Plus, it provides high-efficiency operation. Multiphase converters employ two or more identical, interleaved converters connected so their output is a summation of the outputs of the cells; this can be seen in an example of a three-phase multiphase converter (Fig. 2).

To understand the advantages of the multiphase converter, we must first look at the characteristics of single-phase converters used to supply high current and low voltage. With a conventional single-phase converter, the output ripple and dynamic response improve with increased operating frequency. In addition, the physical size and value of the output inductor and capacitor go down at higher frequencies. Unfortunately, after the frequency reaches its upper limit, converter switching losses increase and lower the converter’s efficiency, forcing a design tradeoff between operating frequency and efficiency.

To overcome these single-phase frequency limitations, multiphase cells operate at a common frequency, but they are phase shifted so conversion switching occurs at regular intervals controlled by a common control chip. The control chip staggers the switching time of each converter so the phase angle between each converter switching is 360°/n, where n is the number of converter phases. The outputs of the converters are paralleled so the effective output ripple frequency is n × f, where f is the operating frequency of each converter. This provides better dynamic performance and requires significantly less decoupling capacitance than a single-phase system.
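
The following Python sketch works through that arithmetic for a few phase counts; the 300-kHz per-phase switching frequency is an assumed example value:

# Phase spacing and effective output ripple frequency for an n-phase converter:
# each phase is shifted by 360/n degrees, and output ripple appears at n * f.
# The 300-kHz per-phase switching frequency is a hypothetical example.

f_sw_hz = 300e3
for n in (1, 2, 3, 4):
    phase_shift_deg = 360.0 / n
    ripple_khz = n * f_sw_hz / 1e3
    print(f"{n} phase(s): {phase_shift_deg:5.1f} degree spacing, ripple at {ripple_khz:.0f} kHz")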

Current sharing among the multiphase cells is necessary so that no single cell hogs the current. Ideally, each cell should deliver the same amount of current. To achieve equal current sharing, the output current of each cell must be monitored and controlled.
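
One simple way to picture that control loop is a proportional trim that nudges each phase toward the average of the measured phase currents; the following Python sketch is a simplified illustration with assumed gains and currents, not any particular controller's algorithm:

# Simplified current-sharing sketch: trim each phase's duty cycle toward the
# average measured phase current. Gain and current values are illustrative.

K_SHARE = 0.005   # assumed trim gain, duty-cycle change per ampere of imbalance

def balance_duties(duties, phase_currents_a):
    avg = sum(phase_currents_a) / len(phase_currents_a)
    return [d + K_SHARE * (avg - i) for d, i in zip(duties, phase_currents_a)]

duties = [0.12, 0.12, 0.12]
currents = [38.0, 30.0, 34.0]            # one phase hogging current
print(balance_duties(duties, currents))  # the 38-A phase's duty is trimmed down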

The multiphase approach also offers packaging advantages. Each converter delivers 1/n of the total output power, reducing the physical size and value of the magnetics employed in each phase. Also, the power semiconductors in each phase only need to handle 1/n of the total power. This spreads the internal power dissipation over multiple power devices, eliminating the concentrated heat sources and possibly the need for a heatsink. Even though this uses more components, its cost tradeoffs can be favorable. And, multiphase converters have a number of important technical advantages:

• Reduced rms current in the input filter capacitor allows use of smaller caps
• Distributed heat dissipation reduces the hotspot temperature, increasing reliability
• Higher total power capability
• Increased equivalent frequency without increased switching losses, which allows the use of smaller equivalent inductances that shorten load transient time
• Reduced ripple current in the output capacitor reduces the output ripple voltage and allows the use of smaller caps

Multiphase converters also have some disadvantages that should be considered when choosing the number of phases:

• The need for more switches and output inductors than in a single-phase design, which leads to a higher system cost than a single-phase solution, at least below a certain power level
• More complex control
• The possibility of uneven current sharing among the phases
• Added circuit layout complexity

As current requirements increase, so does the need for increasing the number of phases in the converter. ICs providing just two phases may not be adequate because of their limited output current range. An optimum design requires tradeoffs between the number of phases, current per phase, switching frequency, cost, size, and efficiency. Higher output current and lower voltage also require tighter output voltage regulation.
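
A common starting point for that tradeoff is simply to size the phase count from the total output current and the current each phase can comfortably deliver, as in the Python sketch below; the 30-A-per-phase figure is an assumption for illustration, not a fixed rule:

# Rough phase-count sizing: divide the total output current by the current each
# phase can comfortably handle. The 30-A-per-phase limit is an assumed design
# value; a real design also weighs frequency, cost, size, and efficiency.
import math

def minimum_phases(total_current_a, max_current_per_phase_a=30.0):
    return math.ceil(total_current_a / max_current_per_phase_a)

for load_a in (60, 100, 150):
    print(f"{load_a} A load -> at least {minimum_phases(load_a)} phases")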

To evaluate multiphase design decisions, we have to review the approaches employed by available ICs. One approach is to use a pulse-width modulation (PWM) controller IC with integrated MOSFET drivers. Yet this technique presents several challenges.

First, the heating and noise generated by the on-chip gate drivers may affect controller performance. Second, for most of these chips, it is impractical to cascade them for additional phases. Next, accurate current sharing is difficult with this configuration. Finally, three phases appears to be the limit.

Another approach is to employ a separate controller and separate gate drivers. This method isolates the PWM controller from the heat and noise of the gate drivers. However, because the current-sense signal must be routed back to the controller, current sharing is more complex. The separated ICs also add controller-to-driver delays.

Still another approach is to use a dual-phase controller with integrated gate drivers and built-in synchronization and current sharing. This technique, though, only allows an even number of phases. Although it simplifies the design, it also may result in unused or redundant silicon, pins, and external components. And, the driver heat and noise generated on-chip can degrade controller performance.

STANDBY POWER

Standby power, also called vampire or phantom power, is the electricity consumed by electronic equipment when it's switched off or in standby mode. The typical loss per device is low, from 1 to 25 W. But multiplied across the billions of devices in homes and businesses, standby losses represent a significant fraction of total world electricity use.
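
A back-of-the-envelope calculation shows why those small per-device losses add up; both numbers in the following Python sketch are illustrative assumptions rather than measured figures:

# Rough aggregate standby loss. Both inputs are illustrative assumptions.
devices = 3e9            # devices left plugged in worldwide (assumed)
avg_standby_w = 5.0      # average standby draw per device, W (assumed)

total_gw = devices * avg_standby_w / 1e9
annual_twh = total_gw * 8760 / 1000
print(f"{total_gw:.0f} GW of continuous load, about {annual_twh:.0f} TWh per year")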

Technical solutions exist in the form of a new generation of power transformers that use only 100 mW in standby mode and can reduce standby consumption by up to 90%. Another solution is the “smart” electronic switch, which cuts power when there is no load and restores it immediately when required.

The One Watt Initiative is an energy-saving proposal by the International Energy Agency (IEA) to reduce standby-power use in all equipment to just 1 W. The IEA launched the initiative in 1999 to ensure, through international cooperation, that all new equipment sold worldwide would use no more than 1 W in standby mode by 2010.

In 2001, U.S. Executive Order 13221 stated that every government agency, “when it purchases commercially available, off-the-shelf products that use external standby power devices, or that contain an internal standby power function, shall purchase products that use no more than one watt in their standby power consuming mode.”

SIDE-EFFECT DESIGN BENEFITS

One side effect of improving energy efficiency is the extension of battery life in portable systems. In addition, space is usually limited in battery-based systems, so appropriate energy efficiency can result in simpler, less bulky heat-removal techniques. In contrast, poor energy efficiency can cause excessive heating that may require exotic and costly cooling techniques that drive up system costs and can potentially lengthen design cycles.

For all electronic systems, enhanced energy efficiency can improve a system's reliability. Semiconductors must operate at controlled temperatures, which affects reliability as defined by their failure rate (expressed in failures per 10⁶ hours of operation). The Arrhenius reliability model states that failure rate is a function of temperature stress: the higher the stress, the higher the failure rate.

Typically, each 10°C rise in temperature increases the failure rate by 50%. Conversely, cutting the operating temperature by 10°C reduces the failure rate. Thus, failure rate and its inverse, mean time between failures (MTBF), can be improved by emphasizing energy efficiency.
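
Using that rule of thumb, the following Python sketch estimates how a change in operating temperature translates into failure rate and MTBF; the baseline failure rate is a hypothetical figure for illustration:

# Failure-rate scaling from the "50% per 10°C" rule of thumb quoted above.
# The baseline failure rate is a hypothetical value for illustration.

def scaled_failure_rate(base_rate_per_1e6_h, delta_temp_c):
    # Each +10°C multiplies the failure rate by 1.5; each -10°C divides it by 1.5.
    return base_rate_per_1e6_h * 1.5 ** (delta_temp_c / 10.0)

base_rate = 2.0   # assumed failures per 10^6 hours at the reference temperature
for dt_c in (-20, -10, 0, 10, 20):
    rate = scaled_failure_rate(base_rate, dt_c)
    mtbf_h = 1e6 / rate
    print(f"delta T = {dt_c:+3d}°C: {rate:.2f} failures per 10^6 h, MTBF about {mtbf_h:,.0f} h")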

Besides reliability and performance issues, semiconductor thermal management involves economic and mechanical challenges that good energy efficiency can minimize. Cost is an important consideration that can often be reduced. Sizing considerations are equally important when increasingly higher-power semiconductors must be accommodated in next-generation system designs.
