ULLtraDIMM

What Is Memory Channel Storage?

Businesses of all sizes have one thing in common: the volume of data they create and access is growing at an unprecedented rate. In an effort to keep up with this data growth, compute performance has improved significantly. The need to retain this data has also led to an explosion in hard drive capacities, performance improvements driven largely by flash, and software for reducing the data footprint. Prior to the advent of flash there was a massive gap between compute performance and storage performance. The vast improvements delivered by flash have done a lot to close that gap, but it still exists today. As computing power continues to improve, applications need to access data more quickly than ever before.

[Figure: relative performance over time of CPU, HDD, and flash]

[Figure: CPU : Data Storage]

The architecture of a modern compute system couples the application, system memory, and processing power together. However, the data accessed by the application is disconnected from that system. The CPU has direct access to system memory, but data stored on a drive is reached over a much slower medium such as SAS, SATA, or Fibre Channel. The most recently used application data resides in system memory for high performance, but memory is limited in capacity due to geometry limitations. This disconnected architecture leads to long round trips and increased latency as the application accesses data from storage. When latency increases, application performance suffers, leading to a loss of productivity for the business. This design has worked well in the past, but the ever-widening storage performance gap is making it less and less viable. In order to maximize CPU efficiency, critical data can no longer be so far away from the memory subsystem. One solution to this problem is Memory Channel Storage.
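To make the round trip concrete, here is a minimal sketch (my illustration, not from the article) that times a read served from DRAM against the same-sized read served over the block I/O path. The file path and buffer size are placeholders, and O_DIRECT is used only so the device latency is not hidden by the page cache.

```c
/*
 * Illustrative only: compare a read that is already resident in DRAM
 * with a read that must travel the storage stack to a block device.
 * The file path and sizes below are placeholders.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define BUF_SIZE 4096

static double elapsed_us(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1e6 + (b.tv_nsec - a.tv_nsec) / 1e3;
}

int main(void)
{
    struct timespec t0, t1;

    /* 1. Data already in system memory: the CPU reads it directly. */
    char *in_memory = malloc(BUF_SIZE);
    memset(in_memory, 0xAB, BUF_SIZE);
    volatile char sink = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < BUF_SIZE; i++)
        sink ^= in_memory[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("DRAM read: %.2f us\n", elapsed_us(t0, t1));

    /* 2. Data on a drive: the same 4 KiB must cross the I/O subsystem. */
    void *aligned;
    if (posix_memalign(&aligned, 4096, BUF_SIZE) != 0)
        return 1;
    int fd = open("/tmp/testfile", O_RDONLY | O_DIRECT); /* placeholder path */
    if (fd >= 0) {
        clock_gettime(CLOCK_MONOTONIC, &t0);
        ssize_t n = pread(fd, aligned, BUF_SIZE, 0);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("Disk read: %.2f us (%zd bytes)\n", elapsed_us(t0, t1), n);
        close(fd);
    }

    free(in_memory);
    free(aligned);
    return 0;
}
```

On a typical server the second number comes back orders of magnitude larger than the first, which is the gap the rest of this article is about.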

[Figure: CPU : App : Memory : Flash]

Memory Channel Storage, or MCS, aims to improve system performance by keeping more data in the memory subsystem. The Memory Channel architecture is a design for both storage and memory. At its core, it allows non-volatile flash storage to be connected directly to the processors of a server. By coupling a large amount of data with the processor, system memory, and applications, MCS can greatly reduce the latency induced by long trip times. The memory bus is the highest-bandwidth bus in a server, and it also happens to be the lowest latency. By connecting non-volatile storage directly to the memory bus, Memory Channel Storage creates a new architecture in which NAND flash storage is block addressable without the latency of the I/O and storage subsystems. A memory controller is designed to handle high-speed data access in a massively parallel environment, and Memory Channel Storage DIMMs make use of this parallelism for read and write acceleration.

The nature of a NAND cell is not conducive to presenting an array of Logical Block Addresses (LBAs) to the host. Unlike a traditional hard drive, individual sectors cannot be overwritten; large chunks of data must be erased at a time rather than individual bits. An additional component, called the Flash Translation Layer, must be added to hide the memory cells and present only an array of LBAs to the host. In an MCS system the Flash Translation Layer is distributed across all DIMMs in the server, which allows for the highest level of performance (a process called parallelization). The combination of memory bus parallelism and the distributed Flash Translation Layer allows reads and writes to happen across many MCS DIMMs at a time.
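The sketch below shows, in heavily simplified form, what a Flash Translation Layer does conceptually: because NAND cannot be overwritten in place, each logical block write is redirected to a fresh physical page and a mapping table is updated. The sizes, names, and layout here are invented for illustration and are not the actual MCS or ULLtraDIMM design.

```c
/*
 * Simplified, illustrative Flash Translation Layer: logical block
 * addresses (LBAs) are remapped to whichever physical NAND page was
 * written most recently, since NAND pages cannot be overwritten in place.
 * All sizes and structures are invented for this example.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_LBAS   8          /* logical blocks exposed to the host      */
#define NUM_PAGES  16         /* physical NAND pages available           */
#define PAGE_SIZE  16         /* bytes per page (tiny, for the example)  */
#define UNMAPPED   0xFFFFFFFFu

static uint32_t l2p[NUM_LBAS];              /* logical-to-physical map   */
static uint8_t  nand[NUM_PAGES][PAGE_SIZE]; /* pretend NAND array        */
static uint32_t next_free = 0;              /* next erased page to write */

static void ftl_init(void)
{
    for (int i = 0; i < NUM_LBAS; i++)
        l2p[i] = UNMAPPED;
}

/* A write never touches the old page: it lands on a fresh page and the
 * map is updated. Reclaiming stale pages (garbage collection) is omitted. */
static int ftl_write(uint32_t lba, const uint8_t *data)
{
    if (lba >= NUM_LBAS || next_free >= NUM_PAGES)
        return -1;
    memcpy(nand[next_free], data, PAGE_SIZE);
    l2p[lba] = next_free++;
    return 0;
}

static int ftl_read(uint32_t lba, uint8_t *out)
{
    if (lba >= NUM_LBAS || l2p[lba] == UNMAPPED)
        return -1;
    memcpy(out, nand[l2p[lba]], PAGE_SIZE);
    return 0;
}

int main(void)
{
    ftl_init();
    uint8_t buf[PAGE_SIZE];

    ftl_write(3, (const uint8_t *)"first version..");
    ftl_write(3, (const uint8_t *)"second version.");  /* rewrite of LBA 3 */

    ftl_read(3, buf);
    printf("LBA 3 now maps to physical page %u: %.15s\n",
           l2p[3], (const char *)buf);
    return 0;
}
```

In a real MCS deployment this mapping work is spread across every DIMM in the server, which is what lets many reads and writes proceed in parallel rather than funneling through a single controller.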

A Memory Channel Storage device uses the standard DDR interface in a normal RDIMM form factor. It plugs directly into a standard DIMM slot on a server just like DRAM would and supports transfer rates from 800 MT/s to 1600 MT/s. In order to use Memory Channel Storage, a server needs BIOS support that allows the DIMM to be presented as block storage. Since MCS uses NAND flash, it is very economical compared to DRAM, though it does not perform quite as well; at the end of the day, it's still storage, not memory. Currently, there are two MCS devices on the market, with support for several versions of Microsoft Windows, Red Hat and SUSE Linux, and VMware ESXi. In addition, MCS is certified as a flash tier for VMware VSAN.
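Because the BIOS presents the DIMM as ordinary block storage, the operating system can treat it like any other drive. As a hedged sketch, the snippet below opens a block device and asks the kernel for its size; the device path /dev/mcs0 is a hypothetical placeholder, not a real driver or device name from the article.

```c
/*
 * Illustrative only: once an MCS DIMM is exposed as block storage, the OS
 * sees just another block device. The device path is a hypothetical
 * placeholder, not an actual MCS driver name.
 */
#include <fcntl.h>
#include <linux/fs.h>   /* BLKGETSIZE64 */
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    const char *dev = "/dev/mcs0";  /* hypothetical device node */
    int fd = open(dev, O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    uint64_t bytes = 0;
    if (ioctl(fd, BLKGETSIZE64, &bytes) == 0)
        printf("%s: %llu bytes of block storage on the memory bus\n",
               dev, (unsigned long long)bytes);

    close(fd);
    return 0;
}
```

From here the device can be partitioned, formatted, and mounted exactly as a SATA or SAS drive would be, which is what makes the operating system and hypervisor support listed above possible.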

Historically, enterprise storage has been hampered by architectural and technical limitations, and those limitations have severely hurt application performance. Technology like NAND flash has opened up a new era of performance in the storage world. The Memory Channel Storage architecture brings a new way of looking at storage, allowing large amounts of data to be accessed at near-DRAM speeds. MCS changes how we think about flash and system memory and is poised to take the disruption caused by flash to the next level.