In the computer systems market, motherboard chip sets make it a snap to implement the main memory subsystem because the chip set includes the memory controller and interface circuitry for the memory modules. The multitude of chip-set options also allows a performance choice among standard SDRAM, double-data-rate (DDR) SDRAM, and second-generation DDR (DDR2) SDRAM, depending on the level of system performance you want to deliver.
However, in all other market segments that employ various amounts of DRAM—industrial, communications, consumer, and others—there are few off-the-shelf SDRAM interface/control solutions. Developers of high-end embedded processors, as well as of general-purpose consumer and networking chip sets, face a choice: design the logic from the ground up, or select one or more blocks of intellectual property that must then be integrated with the rest of the system design into an application-specific IC (ASIC) or configured in an FPGA.
In just about all systems that use DRAM as the main memory, the bandwidth of the memory-to-CPU interface is often the bottleneck that determines overall system performance. As such, the design of the memory interface, both the basic architectural approach and the selection of the signaling scheme (SDRAM, DDR, DDR2, or still other approaches), plays a critical role in determining overall system throughput. Great care must therefore be taken in implementing the memory-controller interface that manages the transactions between the processor and the memory array.
To deliver optimal performance, the memory system must achieve the best combination of bandwidth, access latency, and capacity to meet the demands of the application software running on the processor. Depending on the system size and the cost constraints within which you must work, this is often easier said than done. The table (below) illustrates some typical applications that use DRAMs—a consumer HDTV system, a high-performance graphics subsystem, and a typical PC—and the differences in memory subsystem requirements that each brings with it.
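To see why bandwidth alone doesn't tell the whole story, consider a single isolated burst transfer: its sustained throughput is the burst size divided by the total time, which includes the initial access latency as well as the streaming time. The sketch below uses hypothetical numbers (a 3.2-Gbyte/s peak interface, a 60-ns first-access latency, and a 64-byte cache-line burst) purely for illustration.

```python
# Illustrative sketch (hypothetical numbers): why both bandwidth and
# access latency matter. Sustained throughput of one isolated DRAM
# burst is the burst size divided by the total time: the first-access
# latency plus the time to stream the burst at the peak data rate.

def sustained_bandwidth(peak_bw_bytes_s, latency_s, burst_bytes):
    """Effective bandwidth of a single isolated burst transfer."""
    transfer_time = burst_bytes / peak_bw_bytes_s
    return burst_bytes / (latency_s + transfer_time)

# Example: 3.2-GB/s peak interface, 60-ns first-access latency,
# 64-byte cache-line burst.
peak = 3.2e9
eff = sustained_bandwidth(peak, 60e-9, 64)

# The 64-byte burst streams in only 20 ns, so the 60-ns latency
# dominates and effective bandwidth falls to a quarter of peak.
print(f"{eff / 1e9:.2f} GB/s of {peak / 1e9:.1f} GB/s peak")
```

This is why the chip-set techniques mentioned below—wider interfaces, latency-hiding caches—attack different terms of the same equation.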
System Tradeoffs Permit Optimization
Designers developing motherboards for the PC and server market have the most flexibility and a relatively simple design task. By selecting the appropriate motherboard chip set, they can trade off interface width, latency tolerance, and capacity to get the most performance out of slow, low-cost memories.
For instance, you can opt for a chip set that employs a wider memory interface that accesses more DRAMs concurrently for improved bandwidth, or a chip set that can hide longer latencies by leveraging on-chip caches. And, of course, you can use a chip set with a larger address space to handle multiple memory modules and increase memory capacity. For example, some of the latest motherboard chip sets can push data transfer rates into the 6- to 8-Gbyte/s range by using a 128-bit-wide dual-channel memory interface and DDR SDRAM memory modules with a total capacity of 512 Mbytes to 1 Gbyte. Such performance levels require memory modules that operate at effective data rates between 400 and 533 MHz. At faster data rates, even PC chip-set designers are starting to consider outsourcing their memory interface designs.
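A quick back-of-the-envelope check of those figures: a 128-bit (16-byte) interface transferring data at 400 to 533 million transfers per second yields a peak of roughly 6.4 to 8.5 Gbytes/s, consistent with the 6- to 8-Gbyte/s range quoted above.

```python
# Peak-bandwidth arithmetic for a 128-bit dual-channel DDR interface.
# Peak bandwidth = (bus width in bytes) x (transfers per second).

def peak_bandwidth_gbytes(bus_width_bits, transfers_per_s):
    """Peak transfer rate in Gbytes/s for a given bus width and rate."""
    return bus_width_bits / 8 * transfers_per_s / 1e9

low = peak_bandwidth_gbytes(128, 400e6)    # 16 bytes x 400 MT/s = 6.4 GB/s
high = peak_bandwidth_gbytes(128, 533e6)   # 16 bytes x 533 MT/s ~ 8.5 GB/s
print(f"{low:.1f} to {high:.1f} GB/s peak")
```

Note that these are peak figures; as the latency discussion earlier suggests, sustained rates on real workloads are lower.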