What Is 1TRAM? The concept of 1-Transistor Random-Access Memory (1TRAM), based on a planar DRAM cell with smaller cell capacitance, enables designers to craft a smaller cell in a planar CMOS logic process.

1TRAM Applications: 1TRAM memory should find its way into embedded high-density memory applications, including networking, DSP, graphics, and consumer products.

Data Refresh: Because 1TRAM isn't a commodity DRAM, it isn't bound by JEDEC constraints on data refresh. Data refresh remains a manufacturing concern, though, so several factors must be considered.

Product Considerations: Several issues are key to producing an optimized product. An optimized 1TRAM layout boosts yield and acceptable TREF, and knowing critical 1TRAM parameters, plus smart in-line control of them, can increase yield by up to 30%.
Data refresh and cell-layout issues must be addressed to optimally implement this space-saving alternative to SRAM technology.

As the system-on-a-chip (SoC) era marches forward, there's a pressing need to embed large amounts of memory onto a logic chip and make the resulting technology as flexible and cost-effective as possible. SRAMs don't require the overhead of refresh management, so there are always going to be small, distributed SRAM memory blocks on-chip.
Yet real-estate problems crop up with SRAM. This is where the relatively new technology known as 1-Transistor Random Access Memory (1TRAM) can be implemented to replace sizable SRAM blocks (see "What Is 1TRAM?"). Considerable knowledge is already available regarding 1TRAM's operation, cell-layout ability, sensitivities to process parameters, and in-line optimization procedures.
The commodity DRAM world dreads data refresh (the only new physical operation, absent from SRAM, that affects 1TRAM operation) because the refresh spec is fixed by a JEDEC standard, and satisfying that spec with high yield is always a major challenge. 1TRAM isn't a commodity DRAM, though, and isn't restricted by JEDEC constraints, so there's great flexibility in defining refresh time for 1TRAM memories. Still, because the concept of data refresh is widely misunderstood outside of commodity DRAM circles, and because the commodity DRAM world views refresh from a totally different perspective, its use remains heavily debated.
The 1TRAM is a special DRAM, and its memory cell is based on charge storage in a capacitor. Over time, charge leaks out of the storage node. The charge has to be restored periodically for the memory to operate properly. The designed-in time period of charge restore is called refresh time, or TREF.
A simple yet important point for 1TRAM refresh is that there's no single fixed refresh spec associated with a process. Designers must consider several factors in conjunction when evaluating TREF: memory density (how many megabits in a product), the amount of designed-in repair per megabit, the highest specs for voltage and operating temperature, and the desired memory yield. Each product in the fab will then have its own optimized TREF.
Figure 1 illustrates the concept of yield versus memory density. It clearly shows that for products with longer TREF, yield depends heavily on how many megabits of memory are embedded in a product. An acceptable TREF for a 2-Mbit product can significantly hurt the yield of a 20-Mbit product.
For mobile products, long TREF is desirable to reduce standby power dissipation. How can one improve TREF without sacrificing product yield? Build more or smarter repair schemes. A better repair scheme will permit the repair of more failed random bits, increasing yield and reducing product cost (Fig. 2). Putting in smarter repair allows TREF to be extended while still achieving an acceptable yield.
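The interplay between density, repair, and yield shown in Figures 1 and 2 can be sketched with a toy model. The code below is an illustration only, not the actual yield model from the article; it assumes retention-failed bits per megabit follow a Poisson distribution whose mean grows with TREF, and that each 1-Mbit block can repair a fixed number of failed bits:

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

def chip_yield(megabits, fails_per_mbit, spares_per_mbit):
    """A chip yields if every 1-Mbit block has no more retention-failed
    bits than its designed-in spare (repair) bits."""
    block_yield = poisson_cdf(spares_per_mbit, fails_per_mbit)
    return block_yield ** megabits

# A longer TREF lets more weak bits fall out, raising fails_per_mbit.
# A failure rate tolerable at 2 Mbits hurts a 20-Mbit product:
print(chip_yield(2, 0.05, 0))   # ~0.90
print(chip_yield(20, 0.05, 0))  # ~0.37
# Designed-in repair restores yield at the same (long) TREF:
print(chip_yield(20, 0.05, 2))  # ~1.00
```

The numbers are arbitrary, but the shape of the result matches the figures: at fixed TREF, yield falls exponentially with embedded density, while a little per-megabit repair recovers it.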
Keep in mind that refresh can have a complicated voltage dependence. Intuitively, one might think that a higher VDD increases the charge stored on the node, increasing the acceptable TREF. On the other hand, memory node leakage increases with VDD, which reduces TREF. Also, timing marginality of sense amplifiers can influence the measured refresh curve in any arbitrary way as voltage changes. In fact, we see that a refresh curve can shift to the right or left (depending on the design) as the testing voltage is increased.
Figure 3 shows a sample set of retention curves with voltage as a parameter. The slope for the main population of failed bits moves out by as much as a factor of three if VDD drops from 2.3 to 1.5 V. A higher VDD enhances the so-called retention tail—a few bits with the shortest retention time that fall out of the main slope of the curves. Such tail bits versus available repair actually determine each product's TREF spec.
It's imperative to screen out all such marginal bits and repair them during wafer-level test. Therefore, a guideline for an optimized voltage guard-band during refresh testing must be devised for 1TRAM products. Depending on the design properties, the guidelines should incorporate higher-than-nominal and/or lower-than-nominal guard-band voltages.
Similarly, TREF strongly degrades at high temperatures since temperature enhances leakage. So, it's very important to guard-band temperature at wafer-level test to screen out weak bits and repair them. Smart layout of a 1TRAM cell can, in fact, both improve the main slope and suppress the tail distribution of the refresh curve while still keeping minimum cell size. Great care must then be taken when laying out the 1TRAM cell, as it will directly translate into higher yield and better power performance.
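Why temperature guard-banding matters can be seen from a back-of-the-envelope retention estimate. The sketch below assumes, as a common rule of thumb rather than a 1TRAM spec, that storage-node leakage roughly doubles every 10°C, and that retention ends when leakage drains the cell's usable charge margin; all component values are illustrative:

```python
def retention_time(c_storage_f, v_margin, i_leak_25c_a, temp_c,
                   doubling_step_c=10.0):
    """Rough retention estimate: time for leakage to drain the charge
    margin C*V. Leakage is assumed (rule of thumb) to roughly double
    every doubling_step_c degrees above 25 C."""
    i_leak = i_leak_25c_a * 2 ** ((temp_c - 25.0) / doubling_step_c)
    return c_storage_f * v_margin / i_leak

# Illustrative cell: 5 fF storage, 0.5 V margin, 1 fA leakage at 25 C.
print(retention_time(5e-15, 0.5, 1e-15, 25))  # 2.5 s
print(retention_time(5e-15, 0.5, 1e-15, 85))  # 64x less: ~0.04 s
```

A 60°C rise cuts the estimated retention by a factor of 64, which is why a bit that passes refresh testing at room temperature can fail in the field without a temperature guard-band.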
A closer look at Figures 1, 2, and 3 reveals some key relationships. Even though there are no refresh-failed bits in Figure 3 at TREF = 2X (even at high temperature and voltage), product yield still drops as compared to the TREF = X yield. That's because Figure 3 displays the refresh curve of one sample, while yield refers to results on many samples. So while the yield curve doesn't show a detailed distribution of bit failures versus TREF, it actually contains true statistical TREF information within all samples tested.
Several items are important to produce an optimized product. First, an optimized 1TRAM cell layout increases yield and acceptable TREF. Next, a correct test methodology is the key to optimizing repair capabilities and, thus, increasing final chip yield. Finally, understanding critical 1TRAM parameters and smart in-line control of those parameters can increase yield by as much as 30% for many products.
Every design has various specific sensitivities to process variations. A low-power product might fail testing if devices have low VT and leak more than the average spec. This low VT will only enhance the performance of a high-speed microprocessor with reduced standby-current constraints, though. Similarly, if a dense SRAM cell is laid out with a critical metal-1 pitch, any overexposure of metal-1 will cause shorts. But such overexposure will increase the performance of an ultra-high-speed SRAM with a larger cell and greater spacing between metal-1 lines.
In a different design, if a critical path goes through many via-1s and metal-2 lines, going to a "via-1 and metal-2" process becomes critical for high yield. The process line fluctuation might still be within wafer-acceptance-test (WAT) limits for the process, but it could adversely affect the statistical yield for different designs.
Semiconductor foundries usually run many products for various applications (and for different customers) in the same fabs. Using this economy of scale benefits their customers. That's why it's important to understand which critical variations in parameters influence customers' product yield the most. TSMC performed such studies for 1TRAM products across several generations of technology. We found consistent patterns of strong yield dependence on certain easy-to-control in-line parameters. Of course, such knowledge is experience-based and is closely guarded by every silicon manufacturer.
Implant process drift will affect 1TRAM yield. It can affect both leakage and read margin of the cell charge. Figure 4 shows how a particular parameter (in this case, a lithography parameter) also influences yield. The figure additionally illustrates the concept of retargeting for yield optimization. If range1 is set by the baseline, the lithography parameter process for a 1TRAM product under consideration can be retuned to fit in range2, increasing average yield. Our experience indicates that several mask-alignment overlays are more important for 1TRAM yield, but they don't influence logic yield.
In addition to retargeting or carefully monitoring critical 1TRAM yield-related parameters, a new type of control can be implemented in the fab. While WAT limit specs remain unchanged, certain WAT flags can be put in place, indicating that an important yield-related parameter is moving in the direction of a "danger zone." If the average parameter value is A and the WAT limit is 2A, the flag will signal when the line drifts to 1.5A. This lets engineers address such line drift early on, reducing the probability of 1TRAM yield degradation.
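The flag logic described above is simple to automate. In this minimal sketch, the function name and the default 0.5 fraction are illustrative; the thresholds reproduce the article's example of nominal A, WAT limit 2A, and a flag at 1.5A:

```python
def wat_flag(value, nominal, wat_limit, flag_fraction=0.5):
    """Flag when a yield-critical parameter has drifted more than
    flag_fraction of the way from its nominal value toward the WAT
    limit, i.e. before the hard limit is actually violated.
    Assumes upward drift (wat_limit > nominal)."""
    threshold = nominal + flag_fraction * (wat_limit - nominal)
    return value >= threshold

A = 1.0
print(wat_flag(1.4, A, 2 * A))  # False: still in the safe band
print(wat_flag(1.5, A, 2 * A))  # True: early warning, WAT spec still met
```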
1TRAM FOR DSPs:
One of the key areas for 1TRAM application is digital signal processing. In fact, digital signal processors (DSPs) are emerging as the largest market adopters of 1TRAM. Because a DSP is a real-time processor, it needs to store code in fast memory for speedy execution, and that memory must support fixed-cycle, deterministic access. Therefore, only SRAM-style memory can be used: in a conventional DRAM, random access is too slow, and only page-mode access can be fast. The operating speed and memory density of 1TRAM are sufficient for the vast majority of DSP applications.
In addition to conventional DSP applications, baseband chips in 2.5G and 3G cellular phones require low power, which puts some constraints on 1TRAM designs. But power issues can be resolved by extending the refresh spec using specific repair techniques. Beyond data processing, these 2.5G and 3G chips must also handle graphics, MP3, and Internet protocols, which adds memory requirements to the silicon solution. Higher bandwidth requires a wide logic-to-memory bus, which points toward an embedded 1TRAM solution rather than standalone SRAM or DRAM.
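The standby-power benefit of extending the refresh spec is easy to quantify: every bit must be refreshed once per TREF, so average refresh power scales as 1/TREF. A sketch with illustrative (not measured) energy numbers:

```python
def refresh_power_w(total_bits, energy_per_bit_refresh_j, tref_s):
    """Average standby power spent on refresh: each bit is
    refreshed once every TREF seconds."""
    return total_bits * energy_per_bit_refresh_j / tref_s

MBIT = 1 << 20
# 8 Mbits at an assumed 1 pJ per refreshed bit:
print(refresh_power_w(8 * MBIT, 1e-12, 0.064))  # ~1.3e-4 W
print(refresh_power_w(8 * MBIT, 1e-12, 0.256))  # 4x TREF -> 1/4 the power
```

Doubling TREF halves refresh power, which is why repair techniques that extend TREF translate directly into longer battery life for 2.5G/3G baseband chips.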