DIMM-based Flash Storage

SSDs in the form of SAS/SATA disks and PCIe cards are really fast, especially when compared with the relative turtle in the storage speed race: the spinning disk.  However, some SSDs still post latency figures in the hundreds of microseconds, which is a lifetime for some applications, such as those that support high-frequency trading.  This is where Memory Channel SSDs come into focus.

SanDisk ULLtraDIMM Flash Memory Module

Lowest Latency Currently Available

As things stand, Memory Channel Storage currently enjoys the lowest latency figures of any type of SSD on the market.  With read latency of just 150 microseconds and write latency as low as a minuscule 5 microseconds, these storage devices are a perfect fit for applications that demand incredibly low latency and high throughput – up to 880 MB/s for reads and 600 MB/s for writes.  From an IOPS perspective, DIMM-based flash devices can achieve up to 140,000 read IOPS and 44,000 write IOPS – per device.  Moreover, these devices can be read from and written to in parallel, so multiple DIMMs can be leveraged simultaneously.
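To put the parallelism claim in concrete terms, here is a minimal Python sketch that estimates aggregate performance for a server populated with several of these DIMMs.  It assumes roughly linear scaling across devices – an assumption for illustration, since real-world scaling depends on the memory controller and the workload.  The per-device figures are the ones quoted above.

```python
# Per-device figures quoted above for a Memory Channel Storage DIMM.
READ_MBPS = 880       # sequential read throughput, MB/s
WRITE_MBPS = 600      # sequential write throughput, MB/s
READ_IOPS = 140_000   # random read IOPS
WRITE_IOPS = 44_000   # random write IOPS

def aggregate(dimms: int) -> dict:
    """Estimate aggregate performance for `dimms` devices accessed in
    parallel, optimistically assuming linear scaling."""
    return {
        "read_mbps": dimms * READ_MBPS,
        "write_mbps": dimms * WRITE_MBPS,
        "read_iops": dimms * READ_IOPS,
        "write_iops": dimms * WRITE_IOPS,
    }

# Four DIMMs would, in the best case, offer 4 x 880 = 3,520 MB/s of reads.
print(aggregate(4))
```

Even if real deployments fall short of linear scaling, the point stands: capacity and performance grow together as you populate more sockets.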

Standard Everyday Block

Even better, Memory Channel Storage presents itself to the computing environment as just another run-of-the-mill block storage device.  In a way, it’s saying, “Nothing special here, folks!”  But there are a few requirements that must be met before things can be this easy.  First, the system has to have a BIOS that supports Memory Channel Storage.  Second, drivers need to be available for the operating system that intends to leverage this DIMM-based storage option.  Drivers are already available for Linux (RHEL, SLES), Windows Server, and VMware.  All in all, these aren’t exactly onerous requirements.

Plenty of Capacity to Meet Myriad Scenarios

At present, these storage DIMMs come in 200 GB and 400 GB capacities, and you can add more than one to a server.  There are all kinds of potential use cases beyond high-frequency trading for this kind of capacity at such low latency.  Think VDI, for example.  The worst part about VDI is the boot storm or login storm.  Imagine a scenario in which a couple of Memory Channel Storage DIMMs in each of your hosts provide a massive and insanely fast cache for exactly these events.  Remember, a single storage DIMM can deliver up to 140,000 read IOPS and 44,000 write IOPS.
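As a back-of-the-envelope illustration of the VDI caching idea, the sketch below estimates how many full desktop images a pair of 400 GB storage DIMMs in one host could hold as cache.  The image size and usable-capacity fraction are illustrative assumptions, not vendor figures.

```python
def vdi_cache_fit(dimm_gb: int = 400, dimms_per_host: int = 2,
                  image_gb: float = 30.0, usable_fraction: float = 0.9) -> int:
    """Rough count of full desktop images that fit in the DIMM-based
    cache on one host.  `image_gb` and `usable_fraction` are
    illustrative assumptions, not measured values."""
    usable_gb = dimm_gb * dimms_per_host * usable_fraction
    return int(usable_gb // image_gb)

# Two 400 GB DIMMs with ~90% usable capacity and ~30 GB images:
print(vdi_cache_fit())  # 720 GB usable // 30 GB per image = 24 images
```

In practice a boot-storm cache holds hot blocks rather than whole images, so the effective reach of that 720 GB would be far larger – but even this naive math shows there is plenty of room to absorb a morning login rush.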

Of course, servers have only so many memory sockets, and each storage DIMM occupies a slot that could otherwise hold RAM.  If your needs are particularly RAM-heavy, consider the opportunity cost of that lost slot.  As with everything, there are tradeoffs to analyze.  On the whole, though, depending on the application, the technology has far more upside than downside.