A Short Guide To Making Money From Digital Power

Sept. 28, 2006
The Darnell Group's Digital Power Forum III proves there are money-making opportunities out there for engineers who can solve system-level problems with the data that power supplies make available to them.

If you think you don't care about power supplies, read this anyway. Money-making opportunities are out there for engineers who can solve system-level problems with the data that power supplies make available to them.

This insight comes from the Darnell Group's Digital Power Forum III: three days, 20 miles outside of Dallas. It's a great venue, even though there's nothing around the hotel but miles and miles of 'burbs and strip malls; the folks who came were serious about digital power!

And come they did. In only the conference's third year, the attendance was "somewhere north of 300," according to Darnell's Jeff Shepard. The engineers who attended came from chip makers, server makers, and power-supply companies.

This conference heralds a broad design opportunity — I sense that a corner has been turned. It is no longer a "digital challenges analog in voltage regulation" story. It became clear that digital's strength is that it can tell the system what's happening within the power supply. And therein lay the zeitgeist of the conference.

Much of the story is about data centers. In short, they use too much electricity and they get too hot. To elaborate:

• A single year's electric bills for a modern data center add up to the cost of buying the land and building the structure itself.

• Most data centers will remain in North America, where the power grid is very fragile and likely to stay that way. So, they have multiply-redundant UPS systems and redundant utilities bringing in the juice. Are the big guys worried about transmission losses or grid failure? Consider that Google's latest data center is being built next to The Dalles Dam on the Columbia River, a hundred miles or so upstream of Portland, Oregon; that's not for the great wind-surfing there. Look at every power-conversion step as an opportunity.

• A single cabinet of blade servers today may consume as much as 12 kW, but we're headed for data centers in which each cabinet consumes 50 kW! A rule of thumb: Whatever power you're burning at the rack, you need to buy twice that much power from the utility. That's to account for losses at every conversion step, as well as losses you get when power supplies run inefficiently at less than full load. (Then you spend some more money for the cooling you need just to deal with the heat from those losses.)
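
That 2x rule of thumb is easy to sanity-check. Here's a back-of-envelope sketch in Python; the per-stage efficiencies and the cooling overhead are illustrative assumptions on my part, not figures quoted at the conference, but they land right around the factor of two.

```python
# Back-of-envelope estimate of the utility power needed per rack.
# Every efficiency and cooling figure below is an illustrative assumption.

RACK_LOAD_KW = 50.0          # the projected per-cabinet load cited above

# Assumed efficiency of each conversion step between the utility feed and
# the silicon, at typical (not best-case) operating points.
conversion_steps = {
    "UPS (double conversion)": 0.92,
    "Power distribution unit": 0.98,
    "AC/DC front end in the server": 0.85,
    "DC/DC point-of-load regulators": 0.90,
}

# Power drawn from the utility feed to deliver RACK_LOAD_KW to the loads.
drawn_kw = RACK_LOAD_KW
for efficiency in conversion_steps.values():
    drawn_kw /= efficiency

conversion_loss_kw = drawn_kw - RACK_LOAD_KW

# Assume the cooling plant burns about 0.4 W for every watt of heat it must
# remove (all of the drawn power eventually ends up as heat in the room).
COOLING_W_PER_W = 0.4
cooling_kw = drawn_kw * COOLING_W_PER_W

total_utility_kw = drawn_kw + cooling_kw

print(f"Delivered at the rack:    {RACK_LOAD_KW:6.1f} kW")
print(f"Drawn through the chain:  {drawn_kw:6.1f} kW "
      f"({conversion_loss_kw:.1f} kW lost as heat)")
print(f"Cooling overhead:         {cooling_kw:6.1f} kW")
print(f"Bought from the utility:  {total_utility_kw:6.1f} kW, "
      f"about {total_utility_kw / RACK_LOAD_KW:.1f}x the rack load")
```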

So there are a lot of schemes for making a data center more efficient. Many of the presentations at the conference dealt with processor and memory architectures. Others dealt with "virtualization," a ubiquitous word these days, which in this case means having operating systems smart enough to move threads from platform to platform dynamically. That way, only the minimum number of servers needs to be powered up at any given time, and the servers that stay active run their supplies near full load, where efficiency is highest.
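
A quick illustration of why that matters: supply efficiency typically sags at light load, so the same total workload costs less at the wall when it's packed onto fewer, fully loaded servers. The efficiency curve, server rating, and workload in the sketch below are made-up numbers for illustration only.

```python
# Illustrative comparison: spread a workload thinly vs. consolidate it.
# The efficiency curve is a made-up stand-in for a server front-end supply
# that peaks near full load and sags at light load.

def supply_efficiency(load_fraction):
    """Assumed efficiency vs. fraction of rated load (0..1)."""
    points = {0.1: 0.65, 0.25: 0.78, 0.5: 0.86, 0.75: 0.89, 1.0: 0.90}
    nearest = min(points, key=lambda f: abs(f - load_fraction))
    return points[nearest]

SERVER_RATED_KW = 0.5      # assumed rated input load per server
WORKLOAD_KW = 2.0          # total compute load that must be served

def input_power(num_servers):
    """Wall power if the workload is spread evenly over num_servers."""
    per_server_load = WORKLOAD_KW / num_servers
    fraction = per_server_load / SERVER_RATED_KW
    return num_servers * per_server_load / supply_efficiency(fraction)

for n in (16, 8, 4):
    load_pct = 100 * WORKLOAD_KW / (n * SERVER_RATED_KW)
    print(f"{n:2d} servers at {load_pct:3.0f}% load: "
          f"{input_power(n):.2f} kW from the wall")
```

(This counts only conversion losses; the fixed idle overhead of the extra lightly loaded servers makes the real-world case for consolidation stronger still.)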

That last point starts to show the advantages of digital power supplies. They can tell the operating system (OS) about load variations: How much current is being drawn right now? The OS can then instruct them to shut down selected voltage rails while stepping down power on others.
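
In code, that conversation might look something like the sketch below. Everything here, the Rail class, the current readings, and the thresholds, is hypothetical; a real system would pull the telemetry over whatever standardized management interface the supply exposes rather than from hard-coded numbers.

```python
# Hypothetical OS-side power manager acting on supply telemetry.
# The Rail class, readings, and thresholds are illustrative assumptions;
# real data would arrive over the supply's standardized interface.

from dataclasses import dataclass

@dataclass
class Rail:
    name: str
    rated_current_a: float
    present_current_a: float   # "how much current is being drawn right now?"
    enabled: bool = True

    def utilization(self):
        return self.present_current_a / self.rated_current_a

def manage_rails(rails, shutdown_below=0.05, step_down_below=0.20):
    """Shut down idle rails; step down power on lightly loaded ones."""
    for rail in rails:
        if not rail.enabled:
            continue
        load = rail.utilization()
        if load < shutdown_below:
            rail.enabled = False
            print(f"{rail.name}: essentially idle, shutting the rail down")
        elif load < step_down_below:
            print(f"{rail.name}: lightly loaded, stepping down output power")
        else:
            print(f"{rail.name}: {load:.0%} of rated current, leave it alone")

# One telemetry snapshot (made-up numbers).
manage_rails([
    Rail("CPU core",  rated_current_a=100.0, present_current_a=62.0),
    Rail("Memory",    rated_current_a=40.0,  present_current_a=6.5),
    Rail("Spare I/O", rated_current_a=10.0,  present_current_a=0.2),
])
```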

Right now, though, on the one hand we have power-supply chips with standardized interfaces for that kind of communication. On the other, we have distressingly few engineers on the systems side who know what to do with this capability or how to accomplish it.

Moreover, there is an organizational structure that has left many systems engineers unaware of either the extent of the problem or the availability of potential solutions. In many companies, the physical plant of the data center (the facility that uses and pays for all the electrical power) is in one silo, and the IT department (the group that demands performance and sells quality of service to the rest of the company) is in another.

I sense design opportunities here. There is a need for chip makers to get their message across, not in terms of the "how-to" basics of making supplies ramp and sequence properly, but in terms of the "what-to" story: if you do this and this, you can save energy. Primarion's Ron Van Dell agrees that reference designs and applications information sell chips, an opinion that Power-One's Mark Wells essentially echoes. The more often and the more effectively that happens, the more successfully we can deal with these problems.

Perhaps the most forward-thinking proposal along those lines comes from Jim Templeton of Zilker Labs. He points out that each blade in a cabinet contains multiple supplies, each capable of reporting local ambient temperature. This can potentially yield an almost real-time 3-D model of temperatures inside the cabinet, which in turn could allow the system to virtualize threads with an eye to minimizing heat stress on other components in the cabinet. It could also, Templeton says, provide instructive information on cooling airflow.
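
A rough sketch of what that could look like in software is below. The sensor positions, the temperatures, and the "schedule onto the blade with the coolest hot spot" policy are my own illustrative assumptions, not anything Zilker Labs described in detail.

```python
# Illustrative sketch of a cabinet-level thermal map built from the ambient
# temperature each point-of-load supply can report. Coordinates, readings,
# and the placement policy are assumptions for illustration only.

from collections import defaultdict

# Each tuple: (blade slot, x position on the blade in cm, y position in cm,
#              reported ambient temperature in degrees C).
supply_reports = [
    (0, 5, 2, 31.5), (0, 20, 2, 34.0), (0, 35, 2, 38.5),
    (1, 5, 2, 30.0), (1, 20, 2, 32.5), (1, 35, 2, 35.0),
    (2, 5, 2, 29.0), (2, 20, 2, 30.5), (2, 35, 2, 31.0),
]

# Aggregate into a coarse per-blade view; a finer 3-D grid would simply
# bin on (slot, x, y) instead of slot alone.
per_blade = defaultdict(list)
for slot, x, y, temp_c in supply_reports:
    per_blade[slot].append(temp_c)

for slot in sorted(per_blade):
    temps = per_blade[slot]
    print(f"blade {slot}: avg {sum(temps) / len(temps):.1f} degC, "
          f"hottest spot {max(temps):.1f} degC")

# A virtualization layer could consult the map when placing new threads:
coolest = min(per_blade, key=lambda s: max(per_blade[s]))
print(f"Schedule the next thread on blade {coolest} (lowest hot-spot temperature).")

# Large, persistent gradients in the same map would also hint at blocked or
# inadequate cooling airflow, per Templeton's suggestion.
```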

I don't mean to give the impression that digital power-supply chip design is now cut-and-dried. There is still a lot happening there. Texas Instruments, for example, introduced family extensions that provide control of up to 16 supplies from one controller. And probably the boldest challenge to chip makers and power-supply engineers came from Intel's Ed Stanford in the Q&A after the plenary sessions. Why, he wanted to know, couldn't we move beyond simply emulating an analog PID controller in the digital domain and take advantage of digital's power to do something truly different in the way of voltage control? Those three days in Dallas played host to plenty of smart folks both ready and willing to take up Stanford's challenge.
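
For reference, the baseline Stanford wants the industry to move beyond looks roughly like the fragment below: a textbook discrete-time PID loop wrapped around the output voltage, which is essentially an analog controller re-implemented in firmware. The gains, setpoint, and sample period are placeholders, not values from any shipping controller.

```python
# The conventional baseline Stanford challenged: an analog-style PID voltage
# loop re-implemented in the digital domain. Gains, setpoint, and sample
# period are placeholders, not values from any real controller.

class DigitalPID:
    def __init__(self, kp, ki, kd, setpoint_v, dt_s):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint_v = setpoint_v
        self.dt_s = dt_s
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured_v):
        """One control step: returns a new duty-cycle command (0..1)."""
        error = self.setpoint_v - measured_v
        self.integral += error * self.dt_s
        derivative = (error - self.prev_error) / self.dt_s
        self.prev_error = error
        duty = self.kp * error + self.ki * self.integral + self.kd * derivative
        return min(max(duty, 0.0), 1.0)   # clamp to a valid PWM duty cycle

# Example: regulate a 1.2-V rail sampled every 2 microseconds (placeholders).
pid = DigitalPID(kp=0.5, ki=200.0, kd=1e-6, setpoint_v=1.2, dt_s=2e-6)
print(f"Commanded duty cycle: {pid.update(measured_v=1.17):.3f}")
```

Every term in that loop has a direct analog counterpart, which is exactly the limitation Stanford was pointing at.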
