Storage comes in many different flavors, as does the way in which hosts connect to it. Various speeds and protocols are available, and different options suit different environments. There are two primary considerations when choosing how to attach to storage: the requirements of the workload and the requirements of the business. While performance is one factor, the manageability of the storage fabric and the skill set of the IT staff are also important. If a protocol offers a certain benefit but no staff members are proficient in its use, that benefit is of little practical value.
Choosing a protocol (or set of protocols) can be a daunting task. Many arrays support a variety of protocols while others focus on a subset or even a single protocol, making the process even more challenging. The first choice that must be made is between block and file protocols. Do the workloads in question require direct access to the underlying disk they’re accessing? Or do they simply need to store files? The next decision involves existing infrastructure and IT staff: do you need IP storage or another storage networking technology such as Fibre Channel? Further, does the existing staff have the skills and knowledge necessary to manage the networking side? Eventually these requirements will all line up to form a solution, for example: IP storage using iSCSI for hypervisors and CIFS for user shares.
In general, mainstream storage systems support one or more of the protocols listed in the following chart:
Once the method of access has been determined, one must also decide how fast to access it. Obviously, “as fast as possible” is the correct answer. But price is always a factor, and faster is more costly. In an IP storage fabric, the array could aggregate 1 Gbps links, or it could use 10 Gbps. In the near future, it might even use 25, 40, 50, or 100 Gbps. In the case of a Fibre Channel network, the HBAs might connect at 4, 8, or 16 Gbps. The burning question is (as always): what does the workload require?
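To put those link speeds in workload terms, a quick back-of-the-envelope conversion from line rate (Gbps) to usable throughput (MB/s) can help. The sketch below is illustrative only; the `efficiency` factor is a hypothetical fudge for protocol overhead (framing, TCP/IP or FC headers), not a measured value.

```python
def link_throughput_mbps(gbps: float, efficiency: float = 1.0) -> float:
    """Convert a link speed in Gbps to MB/s (decimal megabytes).

    efficiency is a rough, assumed discount for protocol overhead;
    1.0 means raw line rate.
    """
    return gbps * 1000 / 8 * efficiency

# Raw line rates for common fabric speeds:
for speed in (1, 4, 8, 10, 16, 25, 40, 100):
    print(f"{speed:>3} Gbps link = {link_throughput_mbps(speed):,.0f} MB/s")
```

Comparing the result against the sustained throughput the workload actually generates shows, for instance, whether aggregated 1 Gbps links suffice or a move to 10 Gbps is warranted.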
With flash-based storage systems, you’re more apt to saturate network connectivity than was likely in the past. Even a few years ago, performance tests between SSDs and HDDs showed a major throughput difference, with hard drives achieving just over 100 MB/s of raw throughput and even the slowest SSDs demonstrating over 500 MB/s. Today, as SSD technology has improved, throughput has become even more impressive.
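The practical impact of that gap is easiest to see as transfer time. This is a back-of-the-envelope sketch using the ballpark figures above (roughly 100 MB/s for a hard drive, 500 MB/s for a slower SSD), not benchmarks of any particular drive, applied to a hypothetical 1 TB dataset.

```python
def transfer_hours(dataset_gb: float, mb_per_s: float) -> float:
    """Hours needed to move a dataset at a sustained throughput."""
    return dataset_gb * 1000 / mb_per_s / 3600

hdd_hours = transfer_hours(1000, 100)  # ~100 MB/s hard drive
ssd_hours = transfer_hours(1000, 500)  # ~500 MB/s SSD

print(f"HDD: {hdd_hours:.1f} h, SSD: {ssd_hours:.1f} h")
```

At these assumed rates the hard drive needs nearly three hours for the job while the SSD finishes in just over half an hour; a single such SSD already pushes close to half of a 10 Gbps link.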