Traditionally, system designers have addressed increasing price sensitivity and demand for feature-rich products by combining ASICs, CPUs, dedicated microcontrollers, and memory ICs to deliver desired product features. This approach raises both power demand and power-management complexity, forcing developers to consider how to intelligently support and manage multiple power-supply sources within the strict power, thermal, and area constraints of complex modern systems.
In telecommunications networking equipment, for example, multiple layers of hardware platform management are implemented to control rack-, chassis-, and board-level network components. These layers exist whether designers are developing fully standards-compliant solutions, as in the case of AdvancedTCA, AdvancedMC, MicroTCA, or VPX approaches; an IPMI-based semicustom system management implementation; or a full-custom solution implementing intelligent power management, regulator sequencing, and voltage rail trimming.
In each of the above cases, designers face basic decisions that drive the cost, complexity, time-to-market, and risks associated with their design. Furthermore, the demands placed on the system designer increase as managed system hardware is introduced into new high-reliability application spaces like aerospace and military. New market-specific capabilities, such as unique bus structures and encryption support, are added to the design to meet these markets’ needs, and any added components represent new single points of failure for the overall system.
Managing power is particularly important in high-reliability applications. At the telecommunications service provider level, service interruptions must be avoided. However, if service is interrupted, data loss must be minimized. In military and aerospace systems, reliability concerns are even more stringent and service interruption and data loss can have life or death consequences.
As military networks are deployed in dynamically changing configurations across a widely dispersed modern battlefield, systems must maintain reliability levels while coping with extended operating temperature ranges, severe vibration environments, single-event-upset tolerance mandates, and active electronic interference from opposing forces.
Managing and controlling system power resources entails a number of different power-management techniques, including careful device selection, power-supply sequencing, monitoring, supervisory signal generation, and closed-loop trimming and margining. To implement such a power-control subsystem, designers typically either build a board-level implementation with off-the-shelf discrete power-management ICs or develop a custom IC design using an ASIC or FPGA platform. Each of these approaches has its benefits and drawbacks, and implementing the best intelligent power-management solution for a design involves consideration of the tradeoffs.
A power-management subsystem needs to embody several key functions to ensure proper system function and expected performance levels: supply sequencing, supervisory signal generation, trimming, and margining. Supply sequencing ensures correct startup of a device by powering up components in sequence according to their unique requirements and supply voltage range. Without such sequencing, conflicts can arise that may impair device functionality.
Supervisory signal generation comes into play when a sudden event interrupts the supply of power to the system. This technique ensures that the system will not be damaged and that the user will be minimally impacted by the interruption. For example, if a user is entering data into an application when a power interruption takes place, supervisory signal generation makes certain the device will be undamaged by the sudden power-down, and the data and application will remain intact upon restart.
Trimming is a control function that keeps device components operating within their respective supply voltage ranges. For example, for a device with components rated at 3.3 ±0.3 V, performance and functionality aren’t ensured below or above that range. Trimming circuitry monitors power rails and adjusts as necessary to ensure the power reaching components is within their specified range(s).
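The trimming loop described above amounts to a simple correction toward nominal whenever the rail drifts out of its window. A minimal sketch in integer millivolts (the 3.3 ±0.3 V window and 10-mV correction step are illustrative assumptions):

```python
def trim_step_mv(measured_mv, nominal_mv=3300, tol_mv=300, step_mv=10):
    """One iteration of a trimming loop: return the correction (in mV)
    to apply if the rail has drifted outside its specified window."""
    if measured_mv > nominal_mv + tol_mv:
        return -step_mv          # rail too high: trim down
    if measured_mv < nominal_mv - tol_mv:
        return +step_mv          # rail too low: trim up
    return 0                     # in range: leave alone

# Drive a rail that has drifted to 3.72 V back into the 3.0-3.6 V window.
rail_mv = 3720
while (corr := trim_step_mv(rail_mv)) != 0:
    rail_mv += corr
print(rail_mv)  # 3600, back at the edge of the allowed window
```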
Margining is the most complex and difficult of these techniques to implement. Nonetheless, it can yield significant benefits in complex systems. A growing number of designers need to dynamically alter the precise supply-voltage value to capitalize on potential power savings and/or performance improvements.
When a device operates at the high end of its specified power range, e.g., 3.6 V for a device rated at 3.3 ±0.3 V, it will deliver the highest performance, but will consume the most power. Likewise, power consumption is minimized, while performance is somewhat compromised, at the low end of the power-rail voltage operating range. The ability to selectively tweak power-supply levels within a specified range creates a more optimized system design. For example, if the nominal 3.3-V power rail could be dynamically adjusted between 3.25 and 3.35 V, the system could favor either lower power consumption or higher performance as required at any given time.
Power-supply margining addresses this need by continuously monitoring power rails and incrementing the rails up or down to a user-specified value within the device’s specified range. This action occurs in response to signals generated by the system requesting a move to one optimum configuration or the other.
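The incrementing behavior can be sketched as stepping a rail toward a requested setpoint while clamping to the device's specified range. The 3.0–3.6 V limits and 5-mV step below are assumptions for illustration only:

```python
def margin_toward(current_mv, target_mv, lo_mv=3000, hi_mv=3600, step_mv=5):
    """One margining increment toward a requested setpoint, clamped to the
    device's specified operating range (all values in millivolts)."""
    target_mv = max(lo_mv, min(hi_mv, target_mv))  # never leave the range
    if current_mv < target_mv:
        return min(current_mv + step_mv, target_mv)
    if current_mv > target_mv:
        return max(current_mv - step_mv, target_mv)
    return current_mv

# The system requests the low-power configuration: margin down to 3.25 V.
v = 3300
while v != 3250:
    v = margin_toward(v, 3250)
print(v)  # 3250
```

Note that a request outside the specified range is clamped rather than honored, so the rail can never be margined past the device's limits.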
It’s important to note that a controller may adjust an individual component away from its own isolated optimized state. When optimizing the performance versus power consumption of an entire functional system rack, a higher-level controller could determine that individual lower-level components should operate at a less efficient configuration to deliver the best overall system operation.
Unlike trimming, supply sequencing, and supervisory signal generation, margining is a power-control technique that can deliver both improved system performance and decreased power consumption.
For key power-control functions, and in particular for power margining, a closed-loop power-control subsystem is essential. Only by continuously monitoring supplies in a closed feedback loop is it possible to make the real-time corrections and adjustments demanded by these techniques.
A closed-loop power-control system has three main components (Fig. 1). An analog-to-digital converter (ADC) converts the power-rail value to digital form for processing by the controller. To support margining and the other functions, a basic ADC is typically adequate. The main consideration when implementing the ADC is voltage resolution. Many ADCs support up to 12-bit resolution, but some applications requiring very high-resolution margining might need 18 bits or more.
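The resolution consideration is easy to quantify: an ideal n-bit ADC divides its full-scale input into 2^n steps. A quick back-of-envelope calculation (a 3.3-V full-scale input is assumed for illustration):

```python
def lsb_volts(full_scale, bits):
    """Smallest voltage step an ideal n-bit ADC can resolve."""
    return full_scale / (2 ** bits)

# On a 3.3-V full-scale input:
print(f"{lsb_volts(3.3, 12) * 1e3:.3f} mV")  # 12-bit: about 0.8 mV per step
print(f"{lsb_volts(3.3, 18) * 1e6:.1f} uV")  # 18-bit: about 12.6 uV per step
```

A sub-millivolt step is ample for most sequencing and supervisory work, which is why 12 bits is usually sufficient; only very fine-grained margining pushes toward 18 bits.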
A closed-loop controller is an engine that constantly monitors and provides feedback through the analog interfaces to ensure the aggregate needs of the system are met. These needs could range from keeping operation within the specified range to dynamically optimizing power consumption. The controller performs all digital manipulations associated with power control. The operations performed are straightforward measurement, comparison, and command operations and can be handily implemented in digital logic.
The digital-to-analog converter (DAC) that translates the processed power-control information in this system is arguably the most difficult design challenge. A typical approach for implementing the DAC is to employ a pulse-width-modulation (PWM) output, feeding into a single-pole RC low-pass filter. This configuration is cost-effective, but can create high output ripple, since ripple voltage is a function of PWM duty cycle, PWM period, and the RC time constant.
Output ripple becomes a significant problem with operations involving small voltage increments, such as margining. Ripple values for a typical PWM DAC can range in the hundreds of millivolts— much too high for margining applications. Adjusting the RC time constant or adding second- or third-order filters can help, but these measures increase the cost, complexity, and space requirements. Figure 2 shows the ripple voltage characteristics of a typical PWM DAC.
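For a single-pole RC filter whose time constant is much longer than the PWM period, the peak-to-peak ripple is well approximated by Vdd · D(1 − D) · T / RC, where D is the duty cycle and T the PWM period. A quick check with illustrative values (a 3.3-V swing, 20-kHz PWM, and RC = 1 ms, all assumed for the example):

```python
def pwm_ripple_pp(vdd, duty, period, rc):
    """First-order peak-to-peak ripple estimate for a PWM output into a
    single-pole RC low-pass filter; valid when rc >> period."""
    return vdd * duty * (1.0 - duty) * period / rc

# 3.3-V swing, 50% duty cycle, 20-kHz PWM (T = 50 us), RC = 1 ms:
print(f"{pwm_ripple_pp(3.3, 0.5, 50e-6, 1e-3) * 1e3:.1f} mV")  # about 41 mV
```

Tens of millivolts of ripple from even a generous RC constant illustrates why a plain PWM DAC struggles to support fine-grained margining steps.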
As an alternative to the traditional PWM DAC, a DAC customized specifically to deliver low-ripple output avoids these challenges in power-margining applications. By outputting narrow pulses of constant width, spread evenly over time so that the average value remains proportional to the duty cycle, the filter's output is a dc voltage directly proportional to the duty cycle. This type of pulse train allows much lower ripple at the filter output and permits higher bandwidth, smaller R and C values, or both.
By effectively reducing output pulse width to one clock-cycle period, the ripple at the output of the downstream low-pass filter can be significantly diminished. Such a low-ripple DAC ideally requires only a single RC pole filter and limits ripple to well within the tens-of-microvolts range. A low-ripple DAC as described here has been implemented and proven in hardware using mixed-signal FPGA technology (see “Low-Ripple DAC Implementation,” www.electronicdesign.com, ED Online 20248).
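The ripple advantage of spreading pulses can be demonstrated numerically. The sketch below (a simulation for illustration, not the referenced hardware implementation) filters a conventional PWM train and an evenly spread pulse train of the same duty cycle through a discrete-time single-pole RC model and compares steady-state ripple:

```python
def rc_ripple(pulse_train, rc_in_clocks, cycles=200):
    """Steady-state peak-to-peak ripple of a repeating 0/1 pulse train
    through a discrete-time single-pole RC filter (dt = one clock)."""
    alpha = 1.0 / rc_in_clocks
    period = len(pulse_train)
    v = 0.0
    samples = []
    for i in range(cycles * period):
        v += alpha * (pulse_train[i % period] - v)
        if i >= (cycles - 1) * period:   # record the final, settled period
            samples.append(v)
    return max(samples) - min(samples)

duty = 4 / 16
pwm = [1] * 4 + [0] * 12        # one wide pulse per 16-clock period
spread = [1, 0, 0, 0] * 4       # same 25% duty, pulses spread evenly
print(rc_ripple(pwm, 50) > rc_ripple(spread, 50))  # True: spreading cuts ripple
```

Spreading the on-time into four single-clock pulses shortens the effective period by a factor of four, and the first-order ripple estimate scales down accordingly.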
As the system designer considers these individual capabilities, it’s often important to accommodate additional requirements for component hot-swap, power-supply variability tolerance, graceful failover to preserve service, and remote management and monitoring. Each of these, in turn, introduces yet more requirements for additional levels of power monitoring, management and sequencing, upstream communication of status and system maintenance requirements, and local fault tolerance across what quickly becomes a very complex design problem.
Designing a power-management subsystem for complex systems presents a number of challenges. In addition to the technical complexity of DAC design for power margining, designers must also grapple with keeping the power subsystem size and cost in check. They must additionally minimize the thermal load of the completed product by paying close attention to the total power consumption of the devices selected. Combining analog and digital elements presents another integration challenge. Of course, in the competitive portable marketplace, time-to-market pressure is always an issue.
Three different implementation options can be considered to address these challenges: building a system from discrete components; developing a design using ASIC technology; or developing a design using FPGA technology.
Discrete devices span the spectrum of closed-loop power-management functions, from standalone DACs and ADCs, to standalone controllers, to integrated devices that perform several functions. With discrete components, it’s possible to achieve high voltage resolution and very good performance. Development with discrete components can also be relatively low-risk.
A board-level implementation using discrete devices, though, is often infeasible where space is at a premium. Complexities also arise in designing a board-level system, such as device connectivity, noise sensitivity, and signal-integrity issues. The more components in a system, the more work is required to ensure that no interference occurs between chips on the board and that noise is controlled. Dealing with these issues can extend development time and introduce cost and complexity risk into the design process.
An ASIC implementation, by contrast, offers the highest level of integration, and thus the smallest form factor. Without discrete components, noise and signal-integrity issues can be eliminated. Mixed-signal technology is increasingly available today, making it possible to realize all of the closed-loop system functionality on a single ASIC. The downsides to consider with ASIC devices are the very long development and prototyping time, high upfront cost, and the intrinsic lack of flexibility to quickly extend or modify a design. These factors can make ASIC implementation impractical for many systems.
Implementation using mixed-signal FPGA technology also enables a single-chip, customized power-control system (Fig. 3). Unlike ASICs, though, FPGA development is inexpensive and quick, reducing design risk and time-to-market substantially. FPGA implementation is also reprogrammable, so adding or changing features can be accomplished easily. This ensures better design reuse and enables a platform-based approach, allowing manufacturers to leverage hardware and software design across multiple product models.
Overall power consumption of the power-control subsystem also benefits significantly from selecting FPGA devices based on low-power technologies. On top of that, overall system design benefits from selecting nonvolatile FPGAs that don’t require additional discrete devices to reprogram them at power-up.
The potential drawbacks with an FPGA approach are that pre-configured modules, such as ADCs or DACs, might not meet specific system demands. For example, a 12-bit ADC offered by the supplier may not meet the needs of a high-end application requiring 18-bit resolution, or a vendor’s DAC might not support low-ripple output, making it impossible to use power margining to help minimize overall power consumption. However, for most applications, 12-bit ADC performance is more than adequate. Moreover, the flexibility and economies of scale enabled by FPGA technology make it the clear solution of choice for many designers.
Power management is a multi-faceted challenge, but the underlying imperatives for complex system design remain unchanged: reduce overall size, increase total system reliability, reduce power consumption, lower thermal load, lower cost, and deliver flexibility to respond to market dynamics. Implementing power control with discrete devices, or in a single-chip mixed-signal ASIC or FPGA, depends largely on a designer’s technical and business constraints.
Mike Brogley, product marketing manager, System Applications and IP Marketing, holds a bachelor’s degree in aeronautics from San Jose State Univ., Calif.