NAND and NOR flash memory aren’t the only non-volatile memory (NVM) technologies around, although they are dominant at the moment. They have limitations such as read/write speed and write lifetime. Other NVM technologies like MRAM continue to improve and are challenging flash memory in a number of applications where persistent memory is being used.
I talked with Spin Transfer Technologies’ CEO Tom Sparkman about how the company is enabling MRAM to take on SRAM and DRAM in applications such as AI, IoT, 5G, and data centers.
Tom Sparkman, CEO, Spin Transfer Technologies
What are the advantages of MRAM vs. other non-volatile memory?
That question will benefit from taking a broader look, as there are actually two pieces to the answer that are critical to MRAM’s success.
Let’s look first at NVM flash memory. There are substantial challenges in getting flash memory below 28 nm. This matters because sub-28-nm processes are the reality for embedded applications, both today and going forward. MRAM has the advantage of already working below 28 nm, as well as being faster and lower power.
Now if we look at other emerging technologies like 3D Xpoint memory, MRAM provides a strong performance advantage and is looking like it will deliver greater endurance as well.
Why is it that current technologies, such as SRAM, DRAM, and NVM flash, are hitting their limits?
There are different technical reasons for each of these hitting their limits.
SRAM hit a major inflection point in scaling at around the 28-nm node. SRAM bitcells for many years were in the 100-120 F-squared range. But around 10 years ago, SRAMs stopped scaling as fast as the surrounding logic, and bitcells grew to 200 F-squared and more. This means SRAM is taking relatively more area as process geometries shrink.
And it’s getting worse because SRAM has a hard time with FinFET processes. At advanced nodes, the relative size of SRAM is growing much more rapidly, by a factor of four, five, or maybe even 10. So that 200 F-squared is becoming 500 or 1,000 F-squared, or even larger. What is really interesting about the timing is that it’s happening just as the world has decided it wants more memory close to the microprocessor, not less.
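To make that scaling math concrete, here’s a quick back-of-the-envelope sketch. The F-squared figures are the ballpark numbers from the discussion; the pairing of specific F-squared factors with specific nodes is an illustrative assumption, since actual figures vary by foundry and cell type.

```python
# Back-of-the-envelope SRAM bitcell area: area = (F-squared factor) * F^2,
# where F is the process feature size. Illustrative values only.

def bitcell_area_um2(node_nm: float, f_squared: float) -> float:
    """Bitcell area in square microns for a given node and F-squared factor."""
    f_um = node_nm / 1000.0          # feature size F in microns
    return f_squared * f_um ** 2

# Classic planar scaling: ~120 F-squared at 28 nm
planar = bitcell_area_um2(28, 120)       # ~0.094 um^2

# FinFET era: effective F-squared ballooning toward 1,000 at 7 nm
finfet = bitcell_area_um2(7, 1000)       # ~0.049 um^2

# The cell shrinks far less than the node shrink alone would suggest,
# so SRAM eats a growing share of the die.
print(planar, finfet)
```

The point of the sketch: a 4x node shrink should cut area ~16x, but the ballooning F-squared factor claws most of that back.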
If you look at applications such as artificial intelligence, self-driving cars, and big data, the way to get the massive amounts of compute needed is to have a very fast microprocessor next to a very large array of memory. Currently that memory is SRAM, which as we’ve discussed, struggles to support these advanced nodes, leading to increased cost. As this need grows, it’s going to quickly become cost-prohibitive to use SRAM. So that’s where we see the opportunity for MRAM, which provides size and power benefits over SRAM.
We’ve already discussed the limitations of NVM flash, but I think it’s worth expanding on. The reason NVM flash can’t go below 28 nm is that there’s a physical limitation on storing enough charge to make it work. MRAM is projected to shrink as far as 7 nm, or maybe even 5 nm, before there’s any concern about its ability to get smaller.
That leaves us with DRAM, which is the dominant memory in most computer applications. After on-chip memory, it’s the main memory any computer system uses. It has much higher density than SRAM—we’re talking gigabytes versus megabytes.
The reason it’s so widely used is threefold. First, it’s fast, with access times in the 12-ns range. Second, its bitcell is very small and dense, around 6 F-squared, enabling capacities in the 4- to 8-gigabyte range. Finally, its endurance is practically unlimited.
So you might be asking yourself: Why would anyone want to replace DRAM? Well, it turns out there are a couple of challenges it presents. DRAM is extremely power-hungry and, as a result, gives off a lot of heat. You’ll hear about Google or AWS putting data centers on top of mountains, next to rivers, and in cold climates. The reason they do that is that the air-conditioning bills are so high. In fact, about a quarter of a data center’s power consumption is attributable to DRAM.
The other key mark against DRAM is that it’s a volatile memory. When you turn the power off, the data disappears. Companies spend significant resources, both in time and technology, on making sure the data isn’t lost if you have a power interruption—even one as short as a microsecond. It just so happens that MRAM inherently addresses both of these challenges.
So if MRAM addresses the two major challenges of DRAM, why isn’t everyone switching? What’s the catch? Current MRAM technology isn’t as fast as DRAM, its cell size is larger, and its endurance isn’t anywhere near that of existing DRAM.
As I hear you say “current MRAM,” it sounds like STT has found a way to address MRAM’s challenges.
You’re exactly right. We’re not ready to go into the details of how the technology works, but we have what we see as three core pillars that address current MRAM’s challenges in speed, size, density, and endurance.
From an endurance perspective, most native MRAM is somewhere around 10⁸ cycles. This sounds like a big number in the computing world, but it’s really not. To give an example, 10⁸ cycles in a typical high-use SRAM application would wear out in days, maybe months. To be viable, MRAM needs to get to 10¹³ or 10¹⁴. We’ve invented what we call an endurance engine that takes advantage of the fundamental characteristics of MRAM to boost endurance by five to six orders of magnitude. And what’s really interesting is that because it exploits the fundamental characteristics of a magnetic tunnel junction, or MTJ, it will work on MTJs from any supplier.
The other big breakthrough we’ve developed is called the Spin Polarizer. The Spin Polarizer is a change to the magnetics of MRAM, but it’s also a ubiquitous technology so anyone can use it. This is important for two reasons.
First, it changes the economics of MRAM because it improves the efficiency of the MTJ by about 50%. Efficiency is one of the key metrics of the MTJ: the ratio of retention (quantified by the parameter Delta) to the critical write current. If this benefit is taken in write current, then for any given retention level you can reduce the write current, and thus the bitcell size, by 50 percent. That’s huge, because if the array is 30% smaller, that’s 30 points of gross margin in a low-margin business. This can also enable a major improvement in speed, going from 20 ns, say, to well under 10 ns. Conversely, the benefit can be taken as an increase in high-temperature retention, which is critical for meeting automotive-grade requirements.
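The efficiency trade-off described here can be sketched numerically. The Delta and write-current values below are purely illustrative assumptions (they don’t come from the interview); the sketch just shows the two ways a 50% efficiency gain can be spent, per the ratio defined above.

```python
# MTJ efficiency = retention (Delta) / critical write current (I_c).
# A 50% efficiency gain can be spent on lower write current at fixed Delta,
# or on higher Delta at fixed current. All values are illustrative.

def efficiency(delta: float, i_c_ua: float) -> float:
    """Efficiency as the ratio of Delta to write current (in microamps)."""
    return delta / i_c_ua

delta, i_c = 60.0, 100.0             # hypothetical baseline cell
base = efficiency(delta, i_c)        # 0.6

# Option 1: keep Delta fixed, spend the 1.5x gain on write current.
i_c_new = delta / (1.5 * base)       # ~66.7 uA, i.e. one-third less current

# Option 2: keep the write current fixed, spend the gain on retention.
delta_new = 1.5 * base * i_c         # Delta of 90: better hot retention

print(i_c_new, delta_new)
```

Option 1 is the smaller-bitcell/faster-write path; option 2 is the automotive high-temperature-retention path the interview mentions.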
The third major technology we have is the ability to do 3D magnetics as well as multilayer cells, meaning more than one bit per cell for MTJs. This is how we solve the size problem, allowing us to go from 30 F-squared to 5 F-squared.
Together, these three additions (the endurance engine, the Spin Polarizer, and 3DMLC) take an existing MRAM you’d find on the market today and allow it to start competing with DRAM.
What applications (5G, IoT, AI, etc.) and devices (smartphones, in-home assistants, etc.) are driving the need for new memory technology?
That question really shows that you understand the market. The human race seems to have an absolutely insatiable desire for memory. As soon as our current hunger is met, we decide we want more. We saw this first when we wanted to store audio, then video, and now big data.
All of these emerging applications that you mentioned are heavily memory-centric. Where this gets a bit tricky is with self-driving cars and AI. Not only are they large consumers of data, but they’re also large consumers of rapidly processed data. So you need not just a large amount of memory for storage, but also a large amount of memory sitting next to the processing unit. This is driving demand for SRAM and DRAM as well as for NAND, and that’s why you see the suppliers of these technologies doing well. Accomplishing all of these fancy, wonderful things requires a ton of memory.
From a manufacturing perspective, there are some added costs involved in making MRAM. What are some of those costs? How will foundries make their money back?
It turns out that it’s actually cheaper to make MRAM than embedded flash. There are some specific machines that foundries would need to buy, but they’re nothing out of the ordinary. And this is really the beauty of the semiconductor industry and what has kept it growing for so long: as an industry, we’re not afraid to change direction when a new technology that’s less complex or works better comes along. That’s why all of the major foundries have announced some level of commitment to MRAM. MRAM is cheaper and proven to work, so I think that’s going to accelerate its adoption rather than hinder it.
Tell us about the benefits of MRAM over SRAM and DRAM.
For the benefits of MRAM over SRAM, it really comes down to size and power. MRAM could be as small as one-tenth the size of SRAM and use a third of the power. At scale, it really becomes a bit of a no-brainer to start looking at MRAM.
The value proposition compared to DRAM isn’t quite as obvious. MRAM might get a little smaller than DRAM, but not significantly so. It will also probably be 20% more power-efficient, which is significant. But the biggest factor is that DRAM is volatile whereas MRAM is nonvolatile. What that means to system architects can’t be overstated. It’s going to change the way computers are designed, because architects will no longer have to worry about power interruptions. Systems will be much more streamlined, and faster, because they won’t need to build in redundancies against power loss.
Why has adoption of MRAM remained slow? What unsolved challenges still need to be addressed?
The simple answer is that MRAM today doesn’t meet the performance levels of SRAM and DRAM, and that has proven to be a very difficult challenge to overcome. More than half the employees at STT have Ph.D.s, so we’ve had a lot of very smart people working for a long time to figure out how to overcome these challenges.
That’s one reason we may not be as well-known as others in the industry. We made the conscious decision to try to get these questions answered before going to market. Now we’re really close and are having a lot of very good conversations with companies whose names you’d know but I can’t talk about.
What are some promising applications that would benefit from MRAM?
I’ll give you a bit of a cheeky answer first and then elaborate: everything. The reason I say that is that the more memory we can put next to the microprocessor, the bigger we can make the cache. That has a huge domino effect: expanding the cache boosts performance, enabling the next generation of microcontrollers and improving access times dramatically.
This advance is what’s enabling us to do things like AI that we couldn’t do before, so it’ll be fun to see what new technologies MRAM will open up.
Recently named CEO of Spin Transfer Technologies, Tom Sparkman has nearly 35 years of experience across medical, automotive, semiconductor, and wireless technologies. His leadership roles include CEO of Samplify Systems, a startup delivering mixed-signal semiconductor and ultrasound solutions, where he raised over $25 million in capital. Additional career highlights include 19 years of various executive positions at Maxim Integrated Products, general manager and senior vice president of Worldwide Sales roles at Integrated Device Technology, and most recently, general manager and senior vice president, Worldwide Sales, at Spansion Inc., prior to its merger with Cypress Semiconductor. Mr. Sparkman holds a Bachelor of Science in Electrical Engineering from the University of California at Berkeley.