Over the last several years, the enterprise datacenter has started to change the way it looks at storage. Software improvements designed for Flash storage have allowed a shift in focus. No longer is the primary concern “Do I have enough storage?” Instead, the enterprise is focused on ensuring that applications have consistent performance. This is a harder problem in a shared storage environment, where a single workload can consume enough resources to negatively impact the others, a problem that has long plagued shared storage arrays.
With many applications accessing the same storage infrastructure, they are forced to share the same finite pool of performance. If one system or application misbehaves and consumes an unfair share of that performance, it leaves less for the other systems.
One way to address this noisy neighbor problem is simply to add more performance resources. In a traditional storage array with spinning media, this means adding more drives. Adding capacity just to meet performance requirements is a very inefficient use of resources, and it feels extremely counterintuitive. Eventually, someone will want to use that capacity, forcing even more drives to be added to the array for performance. The only one who wins in this scenario is the storage array vendor. This is a fundamental flaw of the traditional storage array, but the rise of Flash gave array manufacturers a different option.
Many flash storage arrays aim to decouple storage performance from capacity by brute force, since flash vastly outperforms spinning media. The problem with this brute force approach is that it is fundamentally no different from a traditional array; the performance characteristics of flash simply push the problem to a higher threshold before it is noticed. In fact, with such vastly improved performance it may be hard to imagine a single application consuming a large portion of it. While that may often be true, the real power of an All Flash Array lies in its ability to pack far more data into less physical capacity, made possible by data reduction techniques like deduplication and compression.
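To make the data reduction idea concrete, here is a minimal sketch of block-level deduplication plus compression in Python. It is purely illustrative and not any vendor's implementation: real arrays use variable-size chunking, hardware-assisted hashing, and far more sophisticated metadata, but the core idea is the same, so identical blocks are stored only once, and each stored block is compressed.

```python
import hashlib
import zlib

def store_blocks(data: bytes, block_size: int = 4096):
    """Deduplicate and compress fixed-size blocks.

    Returns (store, recipe): 'store' maps a block fingerprint to its
    compressed payload (each unique block kept once); 'recipe' is the
    ordered list of fingerprints needed to reconstruct the data.
    """
    store = {}   # fingerprint -> compressed block, stored once
    recipe = []  # ordered fingerprints to rebuild the original data
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        fp = hashlib.sha256(block).hexdigest()
        if fp not in store:
            store[fp] = zlib.compress(block)
        recipe.append(fp)
    return store, recipe
```

Feed this highly repetitive data, a database full of zero-padded pages, say, and the physical footprint shrinks dramatically while the logical capacity presented to applications stays the same. That multiplier is exactly why so many workloads end up consolidated onto one array.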
In other words, an All Flash Array creates a magnificently fast, high-capacity pool of storage for the enterprise to consolidate countless workloads onto. As workloads move onto this pool, it is paramount to ensure that applications do not negatively impact each other. Each application must be able to achieve the performance it needs, when it needs it. That is where Quality of Service comes into play.
QoS is nothing new in the enterprise datacenter. Network administrators have been using it for many years in an attempt to evenly distribute bandwidth on an oversubscribed network. Storage QoS, however, is a much newer kid in town. It is an attempt to ensure each application or system gets the IOPS, bandwidth, and latency it requires. In a shared storage environment, one thing matters – consistency! The last thing an application wants is for a process to run quickly one minute and take far longer a few minutes later. That kind of inconsistency makes application response times a nightmare, leaving users unable to plan accordingly.
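One common way a storage QoS layer enforces an IOPS cap is a token bucket, the same mechanism network QoS has used for years. The sketch below is a hypothetical illustration of the technique, not any array vendor's actual code: tokens refill at the configured IOPS rate, each I/O spends one token, and a workload that exhausts its tokens gets throttled instead of starving its neighbors.

```python
import time

class TokenBucket:
    """Token-bucket limiter of the kind a storage QoS layer might use
    to cap a workload's IOPS. Illustrative sketch only."""

    def __init__(self, iops_limit: float, burst: float):
        self.rate = iops_limit       # tokens added per second
        self.capacity = burst        # maximum short-term burst
        self.tokens = burst          # start with a full bucket
        self.last = time.monotonic()

    def allow(self, now=None) -> bool:
        """Return True if one I/O may proceed, False if it should be throttled."""
        now = time.monotonic() if now is None else now
        # Refill tokens for the time elapsed, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

The burst parameter is the interesting design choice: it lets a workload briefly exceed its steady-state limit, which preserves responsiveness for spiky applications while still guaranteeing the long-run average that the other tenants are counting on.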
Every All Flash Array needs an implementation of Quality of Service at the most granular level the array supports. Everything we have been taking advantage of in the compute virtualization world, such as resource reservations, workload isolation, flexibility, and increased efficiency, needs to be integrated into the storage array. Sure, we can sort of do this today with VMware Storage I/O Control, but that isn’t enough for the enterprise of today and beyond. The ability to ensure storage performance for every application, across every compute type, is going to be a base requirement of the storage array, if it isn’t already.