In today’s marketplace we have three mainstream, mature memory technologies: SRAM, DRAM, and Flash. While these technologies have evolved toward higher capacity and better performance, the underlying physics has remained largely static. Each of these mainstream technologies has different use cases aligned to its relative density, volatility, efficiency, and performance. SRAM is used largely in embedded devices such as automobiles and appliances, and as processor cache. Its limited use cases mean DRAM and Flash are far more prevalent in systems. DRAM is extremely fast, offering nanosecond-level latency and effectively unlimited write endurance, but these benefits come at the cost of cell size and volatility. Flash, specifically NAND, has much higher latency than DRAM and limited endurance, but in exchange offers non-volatile cells at lower cost. The gap between DRAM and NAND in latency, endurance, and volatility is very large. What is needed is something in between the two that can provide the best of both worlds.
As we can see above, both DRAM and NAND have various strengths and weaknesses in four key areas. Latency, or access time, is actually determined by the interconnect architecture between the memory technology and the CPU rather than by the memory itself; in this regard DRAM has much lower latency given its proximity to the CPU. Capacity, volatility, and endurance, on the other hand, are tied directly to the memory technology itself. DRAM is volatile memory, meaning that when power is lost the information it holds is lost as well; it is unlikely any new memory technology will be volatile. NAND’s capacity is vastly superior, but manufacturers are reaching a complexity wall in making NAND cells any denser. A next-generation memory needs to combine the non-volatility of NAND without its endurance issues, overcome the complexity wall that limits NAND scaling to larger capacities, and, on top of all of this, be faster and cheaper.
One of the technologies being discussed to bridge the large gap between DRAM and NAND is the cross-point architecture. This type of architecture refers to a 3D, scalable, resistive memory array formed from layers of wires running perpendicular to one another. Sandwiched between each layer of wires are vertical columns that connect the crisscrossed wires. This structure is repeated over and over again to form a grid, similar to prison doors stacked next to one another. Inside each vertical column is a memory cell used to store bits of data. This style of architecture can be used with many different types of memory cells, including RRAM, Phase Change Memory, Spin Torque Transfer Memory, and many more.
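The grid structure described above can be sketched in code. This is a minimal, hypothetical model (the class and its dimensions are illustrative, not any vendor’s design): a cell sits at the intersection of one wordline and one bitline on a given layer, so selecting one wire from each set addresses exactly one cell.

```python
# Hypothetical sketch of cross-point addressing. Each cell sits where a
# wordline crosses a bitline on one layer of the stack, so any single
# cell can be selected without disturbing its neighbors.

class CrossPointArray:
    def __init__(self, layers, wordlines, bitlines):
        self.layers = layers
        self.wordlines = wordlines
        self.bitlines = bitlines
        # One storage element per wire intersection, initialized to 0.
        self.cells = [[[0] * bitlines for _ in range(wordlines)]
                      for _ in range(layers)]

    def write(self, layer, wl, bl, value):
        # Driving one wordline and one bitline addresses exactly one cell.
        self.cells[layer][wl][bl] = value

    def read(self, layer, wl, bl):
        return self.cells[layer][wl][bl]

array = CrossPointArray(layers=2, wordlines=4, bitlines=4)
array.write(layer=1, wl=2, bl=3, value=1)
print(array.read(1, 2, 3))  # -> 1
```

Because every (layer, wordline, bitline) triple resolves to an independent cell, this addressing scheme is what makes the per-cell access described below possible.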
This type of stacked, non-volatile crossbar architecture brings many benefits. First and foremost, each memory cell can be accessed independently of the others, which makes it possible to access large volumes of data at the same time. As with latency generally, we know the limiting factor is going to be the interface between the memory cells and the CPU, and this is no different with the cross-point architecture. Since this design will allow for pushing a far greater volume of data than anything on the market today, it will require a PCI Express interface or something entirely new.
Cross-point architecture in itself does not increase the endurance of data stored within its memory cells. However, depending on the type of memory cell, it can assist with its endurance limitations. For example, in magnetoresistive memory cells such as Spin Torque Transfer Memory, which use magnetic fields to control data, the distinct separation of cells would lead to less magnetic field interference. Lessons learned from NAND Flash also influence cross-point: each memory cell is individually addressable, which limits any write endurance problems the memory cell may have. Volatility is completely governed by the cell, as is individual cell capacity. Cross-point, by virtue of being 3D and stackable, allows for greater density of cells, and its array efficiency allows for a lower cost, made even lower with a small memory cell.
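The endurance benefit of per-cell addressability can be illustrated with a rough sketch. This is an assumed, simplified model (the 4 KB erase-block size is a typical example, not a spec): NAND must erase and rewrite a whole block to change even one byte, while an individually addressable cell is written in place.

```python
# Illustrative comparison, not a real device model. NAND flash erases in
# large blocks, so a small update rewrites the whole block; a cross-point
# cell is individually addressable, so only the touched bytes are written.

NAND_BLOCK_BYTES = 4096  # hypothetical erase-block size

def nand_bytes_written(update_bytes):
    # The entire block is erased and rewritten even for a tiny update.
    return NAND_BLOCK_BYTES

def crosspoint_bytes_written(update_bytes):
    # Only the cells actually being changed are written.
    return update_bytes

update = 1  # a single-byte update
print(nand_bytes_written(update))        # -> 4096 (bytes of wear)
print(crosspoint_bytes_written(update))  # -> 1
```

Under these assumptions a one-byte update wears 4,096 bytes of NAND but only one byte of a cross-point array, which is why byte-level addressing limits write-endurance problems.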
The blurring of lines between DRAM and NAND opens an interesting possibility. Rather than a replacement for system memory or data storage, we may be looking at a third type of memory: one addressable directly by the CPU, the way DRAM is, while simultaneously block-accessible to applications. This is really a new member of the memory hierarchy, sitting between system memory and data storage.
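Where this third tier would sit can be sketched as a simple table. The latencies below are rough orders of magnitude assumed for illustration, not vendor figures, and “Storage-class memory” is the hypothetical new tier the paragraph above describes.

```python
# A hedged sketch of the memory hierarchy with a third tier between
# system memory and data storage. Latencies are rough orders of
# magnitude for illustration only.

hierarchy = [
    ("SRAM cache",           1e-9, "byte-addressable, volatile"),
    ("DRAM system memory",   1e-7, "byte-addressable, volatile"),
    ("Storage-class memory", 1e-6, "byte- or block-addressable, non-volatile"),
    ("NAND flash storage",   1e-4, "block-addressable, non-volatile"),
]

for name, latency_s, traits in hierarchy:
    print(f"{name:22s} ~{latency_s:.0e} s  {traits}")
```

The new tier fills the latency gap of several orders of magnitude between DRAM and NAND while keeping non-volatility, which is exactly the “best of both worlds” role the article describes.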
The improvements cross-point architectures bring to the memory array will allow for vast changes in the compute environment. Application data could be accessed via this new memory at up to 1,000 times the speed of NAND flash used as block storage. As our lives grow more connected and the Internet of Things expands, data is becoming one large shareable database, and continuing to provide a rich user experience means making data accessible faster than ever before. The promise of Big Fast Data can be realized when data is accessible at near-DRAM speeds. A cross-point memory design will help unlock the potential of storage-class memory, and this blurring of lines will fundamentally change how we view data in the years to come.