
Understanding Virtualization & Storage Challenges

As more servers are virtualized and new desktop virtualization initiatives launch, organizations face many challenges in meeting those workloads' needs. Storage is often the biggest bottleneck to performance, and simply adding more and more hard disk drive (HDD) spindles without considering cost and efficiency isn't a sustainable answer for most companies.

This chapter takes a look at the challenges you face when you try to balance competing forces that can seem to be incompatible.

Understanding the I/O Blender

Meeting the challenge of increased demand for storage may sound simple on the surface, but in reality, you need to be aware of a number of competing factors. Besides the need for additional capacity, you also need to consider costs and the required performance levels, which are more challenging to meet in a virtualized environment. Creating the proper balance between these varying interests requires an understanding of how your system is used and what this usage means in terms of the mix of I/O (input/output) levels you need to support. As you deploy more virtual machine instances, the highly random, mixed I/O from each virtual machine competes for storage resources — which leads to unpredictable performance. Many call this the I/O blender effect.
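The blender effect can be sketched in a few lines of Python. In this illustrative model (all numbers are assumptions, not measurements), each virtual machine issues perfectly sequential reads within its own virtual disk, yet once the hypervisor interleaves the streams, the shared array sees huge jumps between consecutive requests:

```python
# Hypothetical sketch of the I/O blender: sequential-per-VM streams
# look random once interleaved at the shared storage array.
NUM_VMS = 4
REQUESTS_PER_VM = 5
VDISK_SIZE = 1_000_000  # blocks per virtual disk (illustrative)

def vm_stream(vm_id):
    """Sequential block offsets within one VM's slice of the array."""
    base = vm_id * VDISK_SIZE
    for i in range(REQUESTS_PER_VM):
        yield base + i  # perfectly sequential from the VM's point of view

# Round-robin interleaving approximates what the shared array sees.
streams = [vm_stream(v) for v in range(NUM_VMS)]
blended = [off for group in zip(*streams) for off in group]

# From each VM's view, offsets step by 1 block; at the array,
# consecutive requests jump by whole virtual-disk sizes.
jumps = [abs(b - a) for a, b in zip(blended, blended[1:])]
print(blended[:8])
print("max jump between consecutive requests:", max(jumps))
```

On a rotating HDD, those jumps translate into seeks, which is why per-VM sequential workloads collapse into random-I/O performance at the array.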

Traditionally, increased demand for storage capacity or performance has been addressed by adding hard disk drives and software, causing storage arrays to become complex (think redundant array of independent disks [RAID] groups, path management, and so on), bloated, and unnecessarily expensive. While the basic architecture of enterprise storage arrays has changed little over the past 25 years, the way organizations use that storage has changed: IT organizations are increasingly consolidating their compute and storage infrastructure with virtualization technology and cloud-based applications shared by thousands of simultaneous users, which changes I/O patterns radically.

Quite simply, different types of usage place very different I/O performance demands on your storage systems. Consider two different scenarios that illustrate this point in a virtual desktop environment:

Call center workers: In a typical call center, you may have hundreds or even thousands of users who all run identical applications on a desktop PC or thin client. Users have little or no opportunity to customize their desktops, and each individual system generates very little I/O in transferring data to and from the workstation. Put all these users together, though, and the aggregate load can create storage challenges that directly affect user experience and customer satisfaction.

Knowledge workers: Users who need to use many different applications throughout the workday and whose value to the organization is based on creativity need fast access to a wide variety of applications and data at random times. Clearly this type of system access demands much higher I/O performance in order to maintain worker productivity levels. Infrastructure performance directly impacts these workers’ productivity.

Although these examples show only the extremes of I/O performance demands, they do demonstrate that you need to understand the differences in system usage before you can make useful choices among storage arrays. The first example requires nowhere near the I/O performance of the second, so the balance of capacity and performance needed in each case is quite different.
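A rough back-of-the-envelope comparison makes the contrast concrete. The per-desktop IOPS figures and the per-spindle rating below are assumptions for illustration, not measurements:

```python
# Illustrative sizing sketch: how many HDD spindles each workload
# would need. All figures are assumed, round numbers.
HDD_IOPS = 180  # rough steady-state IOPS for one 15K RPM spindle

def spindles_needed(users, iops_per_user):
    """Return (total IOPS, spindle count rounded up)."""
    total = users * iops_per_user
    return total, -(-total // HDD_IOPS)  # ceiling division

call_center = spindles_needed(1000, 5)   # light, uniform task workers
knowledge = spindles_needed(1000, 25)    # heavier, bursty knowledge workers

print("call center:", call_center)   # (5000, 28)
print("knowledge:  ", knowledge)     # (25000, 139)
```

The same number of seats can demand five times the IOPS, and therefore several times the spindle count, purely because of how the desktops are used — which is exactly why capacity planning cannot ignore the workload mix.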

Examining Server Virtualization Challenges

Delivering IT services today isn’t what it used to be. Server virtualization has revolutionized application services while massive data growth continues to challenge infrastructure and operational processes. It’s little surprise that recent research shows that increased use of server virtualization remains a top priority for many organizations.

Server virtualization is a process that allocates the resources of a physical computer to different applications or users, so that each virtual server functions as though it were an actual computer, eliminating the need for a separate physical server to host each application. Users are unaware of the physical location or even the identity of the actual physical servers; in most cases, they aren't even aware that their applications are running on virtual servers. Server virtualization increases operating efficiency because fewer physical components are needed to handle the demand for individual servers.

Server virtualization creates challenges by combining mixed workloads that stress storage systems. In a successful server virtualization installation, you must provide for many different needs: block and file storage; high throughput and I/O performance at low latency; high availability; data protection; fast restores; long-term retention; and so on. Addressing all these needs usually means deploying and managing a number of different purpose-built storage systems while keeping up with capacity growth. These requirements drive up both equipment and operational costs and are part of the challenge you face in successfully implementing server virtualization. There must be a better way.

Looking at Desktop Virtualization Challenges

Virtual Desktop Infrastructure (VDI) is a growing trend aimed at reducing costs and administrative overhead while improving reliability and security of users’ desktops. Successful VDI implementations require several factors, including reasonable costs and good performance from users’ perspectives.

Desktop virtualization is a process that separates the desktop environment that users see from the physical hardware. In effect, users see a desktop and applications that are running on a server rather than on a PC. One big advantage to desktop virtualization is that less powerful (and often less expensive) devices are needed at each workstation.

Achieving good performance for virtual desktops while keeping costs at levels similar to traditional desktops requires careful planning and architecture, as well as an understanding of how to optimize the most commonly encountered bottleneck: storage.

Storage is the component most often cited as responsible for performance success or failure, and it generally also has the largest impact on total cost. Achieving good cost and performance levels requires a storage architecture optimized for VDI. The issues relating to storage fall into three areas:

Storage capacity: The capacity requirements for VDI have a significant impact on the choice of a system. For example, the use of layered images can dramatically reduce real storage requirements by keeping only one copy of common data. Technologies such as thin provisioning and deduplication can reduce the capacity you need as well.

Storage performance requirements: The specific mix of VDI desktop types and usage patterns (think boot and login storms) determines how many I/Os per second (IOPS) are required and how performance scales with the number of concurrently active virtual desktops.

Administrative activities: These include the actions that are needed, or at least useful, for traditional functions such as provisioning (adding a new virtual desktop), backup, restore, virus scanning, security, and so on. Examine each of these functions, looking at how it's handled today, who's responsible for it, and how those responsibilities should change with VDI.
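The capacity point above — layered images keeping only one copy of common data — can be sketched with a hypothetical comparison of full-clone desktops against a layered approach that shares a single golden image. All sizes are assumed, illustrative figures:

```python
# Hypothetical VDI capacity comparison: full clones vs. layered images.
# Every figure here is an assumption for illustration; sizes in GB.
DESKTOPS = 500
GOLDEN_IMAGE_GB = 40     # OS plus common applications
PER_USER_DELTA_GB = 4    # user-specific writes per desktop

# Full clones: every desktop carries its own complete image.
full_clones = DESKTOPS * GOLDEN_IMAGE_GB

# Layered: one shared golden image plus a small per-user delta.
layered = GOLDEN_IMAGE_GB + DESKTOPS * PER_USER_DELTA_GB

print(f"full clones: {full_clones} GB")   # 20000 GB
print(f"layered:     {layered} GB")       # 2040 GB
print(f"savings:     {1 - layered / full_clones:.0%}")
```

Even with generous per-user deltas, keeping common data once rather than 500 times cuts the raw capacity requirement by roughly an order of magnitude — which is why the image strategy belongs in the storage-sizing discussion from the start.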