Memory-subsystem and module granularity, the minimum size increase created by adding another row of memory chips or another memory DIMM to a system, is a key factor when selecting memory during the design process. Depending on the application, the memory chip's data-bus width can significantly affect cost and expandability.
In typical computer memory systems, DRAMs are used in 4-, 8-, and 16-bit-wide configurations. When aggregated to provide a 64-bit datapath, 16 DRAMs are needed for an x4 organization, eight for x8 devices, and only four for x16 chips (not including parity or error-checking-and-correction considerations).
Based on today's mainstream density of 512 Mbits per chip, that translates to a memory increment of 1 Gbyte with x4 DRAMs, 512 Mbytes with x8 devices, and 256 Mbytes with x16 devices, as the sketch below illustrates. Of course, you can double those numbers if 1-Gbit DRAMs take the place of 512-Mbit chips. On DIMMs with only eight or four DRAMs, a second row of chips is often mounted on the reverse side of the module, providing a second rank and doubling the DIMM's capacity.
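The chip counts and capacity increments follow from simple arithmetic: divide the datapath width by the chip width to get the chips per rank, then multiply by the chip density to get the rank capacity. The following Python sketch works through the numbers; the function names and constants are illustrative and simply restate the 64-bit datapath and 512-Mbit density cited above.

def chips_per_rank(bus_width_bits, chip_width_bits):
    # Number of DRAMs needed to fill one rank of the datapath.
    return bus_width_bits // chip_width_bits

def rank_capacity_mbytes(chip_density_mbits, chip_count):
    # Capacity of one rank in Mbytes (8 bits per byte).
    return chip_density_mbits * chip_count // 8

BUS_WIDTH = 64       # bits, excluding parity/ECC
CHIP_DENSITY = 512   # Mbits per DRAM

for width in (4, 8, 16):
    count = chips_per_rank(BUS_WIDTH, width)
    capacity = rank_capacity_mbytes(CHIP_DENSITY, count)
    print(f"x{width}: {count} chips per rank, {capacity} Mbytes per rank")

# Prints:
# x4: 16 chips per rank, 1024 Mbytes per rank
# x8: 8 chips per rank, 512 Mbytes per rank
# x16: 4 chips per rank, 256 Mbytes per rank

A two-rank (double-sided) module simply doubles the rank capacity, which is how an eight-chip x8 DIMM can reach the same total as a 16-chip x4 DIMM.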
For most server and high-end workstation applications, the larger the increment, the better. Consequently, x4 or x8 memory chips deliver the best density options on commodity DIMMs or custom memory modules. For high-end PCs that don't require the maximum storage-capacity increment, modules based on x8 DRAMs offer a more economical alternative. For commodity PCs and low-end office computers, x8 and x16 organizations make the most economic sense: fewer memory chips mean a lower module cost and, therefore, a modest upgrade cost to add 256 or 512 Mbytes to a system.
Deciding on the correct granularity also involves the system's expansion limit. Due to bus loading and board-space constraints, most PCs limit memory expansion to between two and four DIMM sockets. That, in turn, limits how much memory can be installed.
In contrast, high-end workstations and servers often include several memory banks, each containing from four to eight registered DIMMs. But, due to bus loading, even that approach restricts the amount of memory that can be installed. To get around that problem, server designs are moving from registered DIMMs to the new fully buffered DIMM architecture developed by Intel and the Memory Implementers Forum (www.memforum.org).