Double-data-rate (DDR) memory and DDR synchronous dynamic random-access memory (SDRAM) are popular in many computing and embedded applications. Originated by the Joint Electron Devices Engineering Council (JEDEC), the technology has evolved from the original DDR specification of the late 1990s through the revisions released in the 2000-2003 timeframe, culminating in what is now called DDR1, or simply DDR SDRAM.
Since then, DDR SDRAM has been eclipsed for new designs by DDR2 and more recently by DDR3 SDRAM technology. This memory is available in IC or module form. Now, DDR4 is looming. But before we get there, designers need to figure out how these new memory systems will be powered and how they will interact with the rest of their end products.
Why Is DDR Important?
When DDR2 became available, it included modifications that enabled higher clock speeds and performance above 400 MHz. As the industry ramped up, DDR3 met the needs of higher-performance systems by delivering higher peak data-transfer rates across a wide (64-bit) data bus, since it supports data rates that are many times the clock speed. Techniques such as prefetch buffering have pushed dynamic performance levels even further.
For example, higher-performance very large-scale integration (VLSI) processors will only work with DDR3 memory due to the sustained data rates extending up to several gigahertz and the low latency needed to keep the system fed with data. As you would expect, and just as in the microprocessor, FPGA, and ASIC world, the power-supply requirements for these memory systems are not trivial.
DDR And Power
DDR3 memory provides a significant power reduction of approximately 30% compared to DDR2 modules due to its nominal 1.50-V supply voltage, versus DDR2's 1.8 V and DDR's 2.5 V. The 1.5-V supply works well with the 90-nm fabrication technology used in the original DDR3 chips, and the latest DDR3 devices are now specified at 1.35 V.
As process technology advances, however, it's desirable to change supply voltages accordingly as geometries shrink. The trend is toward lower voltages and higher levels of accuracy and precision mandated for the power-supply rails.
According to the JEDEC specifications, the maximum recommended voltage of 1.575 V should be considered the absolute maximum when memory stability is the foremost consideration, such as in servers or other mission-critical devices. In addition, JEDEC states that memory modules must withstand up to 1.975 V before incurring permanent damage, although they are not required to function correctly at that level.
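As a rough illustration of those limits, a supply monitor might classify a measured DDR3 supply reading against the nominal, recommended-maximum, and absolute-maximum values quoted above. The thresholds below come straight from the JEDEC figures in the text; the function and its labels are a hypothetical sketch, not any vendor's API:

```python
# Illustrative classification of a measured DDR3 supply voltage against
# the JEDEC figures quoted above: 1.50 V nominal, 1.575 V recommended
# maximum, and 1.975 V absolute maximum before permanent damage.
DDR3_NOMINAL_V = 1.50
DDR3_RECOMMENDED_MAX_V = 1.575
DDR3_ABSOLUTE_MAX_V = 1.975

def classify_ddr3_vdd(vdd: float) -> str:
    """Return a coarse health label for a measured DDR3 VDD reading."""
    if vdd > DDR3_ABSOLUTE_MAX_V:
        return "damage-risk"     # beyond the stress rating
    if vdd > DDR3_RECOMMENDED_MAX_V:
        return "out-of-spec"     # may survive but isn't required to work
    return "in-spec"

print(classify_ddr3_vdd(1.50))   # in-spec
print(classify_ddr3_vdd(1.60))   # out-of-spec
print(classify_ddr3_vdd(2.00))   # damage-risk
```

A real supervisor would of course also check the lower limit and react in hardware, but the classification logic is the same idea.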
The reduction in process geometries dictates the physics of the power rails, the precision and accuracy required, and the resolution of the margining applied to the memory. Memory often leads the pack with density issues, and DDR memory is no exception.
A quick scan of manufacturer datasheets indicates that better than 0.075-V accuracy is required today on the supply rails, and with the upcoming DDR4, rails will be heading below a volt with 0.05-V accuracy needed over temperature.
A recent memory scheme connecting a microprocessor to a memory system required frequency power scaling—or more simply stated, changing voltage versus frequency of operation. The memory supplier’s datasheet indicated that VDD and VDDQ had to be within 300 mV of each other at all times, and VREF could not be greater than 0.6 × VDDQ.
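Those two datasheet constraints are easy to encode as a design-time sanity check. The sketch below is illustrative only; the names and structure are assumptions, while the 300-mV tracking window and the 0.6 × VDDQ ceiling are the figures from the requirement above:

```python
# Hypothetical sanity check for the rail relationships described above:
# VDD and VDDQ must track within 300 mV of each other at all times, and
# VREF must not exceed 0.6 * VDDQ.
def rails_ok(vdd: float, vddq: float, vref: float) -> bool:
    tracking_ok = abs(vdd - vddq) <= 0.300   # 300-mV tracking window
    vref_ok = vref <= 0.6 * vddq             # VREF ceiling
    return tracking_ok and vref_ok

print(rails_ok(1.50, 1.50, 0.75))   # True: rails matched, VREF = VDDQ/2
print(rails_ok(1.80, 1.35, 0.675))  # False: rails 450 mV apart
```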
Memory does not do much by itself. When the memory is connected to the rest of the system it interfaces with, the power-system complexity increases in order to optimize and protect the system: to ensure reliability, reduce power consumption, and eliminate sneak paths and latch-up conditions during operation.
If a system consisted of just DDR memory, with the necessary rails and termination voltages, it might be challenging enough to support the power-supply rails at the necessary voltages with the accuracy and precision required to make the memory system work. As process geometries fall, it would be advantageous to plug in new technology modules in designs without a redesign of the power-supply hardware to facilitate reuse and reduce design time.
As complexity goes up and geometries go down in the memory and in all of the VLSI devices (including the processors) the memory interfaces with, the required rail voltages drop at the system level. The power-system requirements become more complex, as an actual system requirement recently encountered indicates (Fig. 1). This is a relatively simple power system of only five rails, though it's common to see 10, 20, or more today.
When combining DDRx memory with other VLSI components, it's necessary to sequence rails and add delays when bringing rails up or shutting them down, as processors today often have multiple rails themselves. For the devices tied to those rails, sequencing, timing, voltage accuracy, and startup and shutdown slew rates become critical, and the power-supply interactions become a significant system concern.
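A minimal sketch of that kind of ordered bring-up is shown below. In a real design the `set_rail()` callback would program the power controller (for instance over I2C); here it just records the sequence. The rail names, voltages, and delays are assumptions for illustration, not values from any datasheet:

```python
import time

# Illustrative power-up sequencer: enable rails in a defined order, each
# with its own inter-rail delay and target voltage.
POWER_UP_SEQUENCE = [
    # (rail name, target volts, delay before enabling, seconds)
    ("VCORE", 1.00, 0.000),
    ("VDDQ",  1.50, 0.005),   # memory I/O rail 5 ms after the core
    ("VTT",   0.75, 0.002),   # termination rail tracks VDDQ/2
]

def power_up(set_rail):
    """Walk the sequence, waiting each rail's delay before enabling it."""
    for name, volts, delay_s in POWER_UP_SEQUENCE:
        time.sleep(delay_s)
        set_rail(name, volts)

events = []
power_up(lambda name, volts: events.append((name, volts)))
print(events)
```

A shutdown sequence is typically the same table walked in reverse, often with different delays.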
As the system evolves through its life cycle, from concept through design and field deployment to end of life, what if the system requirements change as device specifications are revised by the suppliers? Memory and VLSI devices now change faster than the expected lifetimes of the products they go into.
Increasingly, it is critical for the product to be able to modify its power-supply characteristics under its own software control. The question becomes what options are available to a designer when such a requirement is presented to the power-supply team by the digital-software engineers, who typically debug and develop their systems with a bank of lab power supplies run by LabVIEW on a PC.
This, of course, has to be duplicated in something affordable that can be incorporated into the product—usually without much design time. A build-it-yourself design will usually require a handful of digital-to-analog converters (DACs), analog-to-digital converters (ADCs), digital potentiometers, microcontrollers, precision resistors, capacitors, pulse-width modulation (PWM) controllers, and lots of design time.
A designer could also choose one of the complex "system monitor/power manager" devices that incorporate some of this functionality yet still require external PWM controllers to begin to provide the system-level power requirements. These parts tend to be rather expensive, and you may pay for features, functions, and complexity you don't use or need. The design time also can be long, as these parts have a learning curve.
A New Approach
Another approach uses a software-programmable power solution. The PowerXR family of products from Exar incorporates digital power control, which gives designers a great deal of flexibility and a graphical user interface (GUI) so they can easily set up the system (Figs. 2, 3, and 4). No software programming knowledge is required.
This approach enables system power designers to simply answer questions and fill in boxes with information to create a successful initial design. The I2C interface on the IC allows continuous communications with the system so dynamic changes with operational system needs can be adjusted on the fly or reconfigured as needed.
The GUI can be used to develop and direct register control via the I2C interface, which the system's own microprocessor or controller can then use for direct system control. The GUI is available online at no charge. The resulting design is proven and easier to implement, with less design time and far fewer parts.
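In firmware terms, that runtime control boils down to register writes over I2C. The sketch below shows the shape of such a call. The register map, device address, and scaling here are entirely hypothetical; in a real design they come from the device documentation and the GUI-generated configuration, and the bus object would be a platform I2C driver rather than this stand-in:

```python
# Sketch of runtime power reconfiguration over I2C. All addresses and
# register offsets below are assumptions for illustration only.
class I2CBus:
    """Stand-in for a platform I2C driver; records writes instead of
    touching hardware."""
    def __init__(self):
        self.writes = []
    def write_word(self, dev_addr, reg, value):
        self.writes.append((dev_addr, reg, value))

DEV_ADDR = 0x28           # assumed 7-bit controller address
REG_CH1_TARGET_MV = 0x10  # assumed base register: channel-1 target, mV

def set_channel_voltage(bus, channel, millivolts):
    """Program one channel's target voltage (assumed per-channel layout)."""
    reg = REG_CH1_TARGET_MV + (channel - 1)
    bus.write_word(DEV_ADDR, reg, millivolts)

bus = I2CBus()
set_channel_voltage(bus, 2, 1350)  # e.g., margin a DDR3 rail to 1.35 V
print(bus.writes)
```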
Since no compensation network is needed with the usual resistors and capacitors, drift over time and temperature is reduced greatly. Additionally, complex slew-rate control delays and sequencing are simple to implement. As system needs change, the power system can be reconfigured even remotely without any hardware changes. Reconfigurability over the entire product life-cycle is possible and, in fact, easy to accomplish.
One of the benefits of the system-level approach is that the hardware is completely reusable. The hardware design stays constant, and reusability is easier than ever before. Designers can achieve differentiation by defining the power system in simple software and saving it on the IC.
Additionally, if three, four, 20, or more channels are needed, it's a simple matter to daisy-chain additional devices and extend the system via I2C, repeating essentially the same design until the desired number of channels is reached. Current levels per channel are defined by software and component selection.
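The bookkeeping for such a daisy chain is simple: a logical channel number maps to a device on the bus plus a local channel within that device. The mapping below assumes four-channel controllers and made-up I2C addresses, purely to illustrate the arithmetic:

```python
# Illustrative mapping from a logical channel number to a (device
# address, local channel) pair when four-channel controllers share one
# I2C bus. Device addresses are assumptions for this sketch.
CHANNELS_PER_DEVICE = 4
DEVICE_ADDRESSES = [0x28, 0x29, 0x2A]  # three devices -> 12 channels

def locate_channel(logical_channel: int):
    """Map a 1-based logical channel to (device address, 1-based local
    channel)."""
    index = logical_channel - 1
    device = DEVICE_ADDRESSES[index // CHANNELS_PER_DEVICE]
    local = index % CHANNELS_PER_DEVICE + 1
    return device, local

print(locate_channel(1))   # first device, first channel
print(locate_channel(8))   # second device, fourth channel
```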
Differentiation is achieved via software, so the implementation can be reused across multiple products and product lines. Additionally, economies of scale are possible when the hardware is reused and different products are differentiated only in software. This simplifies design and all areas of the operation can benefit, including the supply chain, logistics, and field service.
The implementation also allows telemetry. Once system operating data is available, whether on the bench for debug or remotely in the field, the system can be reconfigured and monitored for system health or field-service diagnostics.
Power-System Solution Approach
The needs of VLSI devices including DDR memory are becoming more complex. A power-system solution approach can make implementation easier and create customer benefits that were previously impossible.
For example, designers can save energy by allowing the system to modulate and optimize its own power supply to extract the best performance per watt. This approach also makes it possible for the VLSI system to work properly while adding features and functions that will benefit end users.
Digital designers have had reprogrammable and software-defined systems for years now. It's time for power-system designs to take advantage of the same benefits cost-effectively. It's easy to get started designing system-level power rails with low-cost evaluation boards and the free software available.
In no time at all, you'll be surprising yourself with what's possible under software control: reuse implemented simply and cost-effectively, saving design time and money, all while providing the power environment that VLSI-based products need to work reliably over time and environmental extremes.
Reuse could be implemented with a pile of parts, such as digital potentiometers, DACs, ADCs, system-controller subsystems, and PWM controllers. That approach takes quite a bit of design time and parts, though, and it consumes money and board space (Fig. 6).
Alternatively, PowerXR devices can simplify the design of DDR3 and, soon, DDR4 memory systems, since the programmability of the system can create the necessary delays and ramp rates on power-up as well as power-down. The PowerXR system can also generate the voltages required by any desired termination scheme, such as passive or active termination needing a VTT voltage, which usually must be precise and accurate over temperature and time.
Today’s processors additionally need complex, accurate, and precise power, and they can benefit from PowerXR programmable power technology. Channels can be added and daisy-chained together to enable a system where each channel can be controlled individually regardless of how many channels are added.
Also, channels can be added in three-channel or four-channel increments. Thus, an eight-channel implementation would consist of two four-channel devices, where each channel is independently designed and controlled for its unique task. A software GUI, evaluation boards, and complete design assistance are all available at www.exar.com.