Virtualization creates a variety of unique factors that cause storage bottlenecks. An increase in application density, coupled with high volumes of randomized read and write requests, for example, places an increased burden on shared storage resources that ultimately has an adverse impact on application performance.
This condition is only exacerbated as more hosts and virtual machines (VMs) are added to the environment, resulting in yet more storage IO contention for a limited amount of storage performance resources.
One way to overcome the degradation of application performance in virtual environments is through the ubiquitous use of high-speed server resources, like flash and RAM. By putting storage performance into the server tier, companies can accelerate virtualized application workloads and extend the useful life of existing storage assets.
This enables organizations to avoid the expense, risk and hassle of refreshing storage hardware. In addition, new decoupled architectures enable businesses to increase VM density, because more storage IO is available locally from the server. Consequently, as these environments scale out, there is a significant reduction in the server hardware (CPU, RAM) and corresponding virtualization licenses required to support new VM application workloads.
Storage Acceleration – Designing IT Right
To obtain all the benefits described above, IT planners need to be cognizant of various design criteria when evaluating storage acceleration software products. For example, how is the software implemented? Does it seamlessly integrate into the hypervisor? Does it need to be installed as a driver on each individual Guest VM or does it need to run as a virtual storage appliance (VSA) on a dedicated VM?
One must also look at hardware requirements for storage acceleration. What type of server resources can be used (e.g. flash or RAM)? What type of storage is accelerated (file or block)?
In addition, what types of applications are accelerated? Can you optimize only read-intensive applications, or is there also a benefit when running write-intensive workloads (or a mix of both)?
Finally, how well does the storage acceleration software layer complement VM operations and tools, like vMotion? Does it work with and is it transparent to existing virtualization software tools? Or is there an impact to the performance of the underlying application, the storage network or the flash device being utilized following a vMotion operation?
There are a number of storage acceleration software offerings available in the market; however, some better align with the above criteria than others. Here are some basic criteria to consider when evaluating various solutions:
Types of VMs Accelerated
To get the most value from a storage acceleration solution, IT architects should explore what type of IO is accelerated – read, write or both. This, in turn, dictates the types of applications that can be optimized.
Solutions that only accelerate reads might be good for point solutions like web servers or streaming media, but they don't deliver a true decoupled architecture for end-to-end storage acceleration. In other words, they are not an enterprise-wide platform for universally accelerating application workloads, including write-intensive applications like virtual desktops and virtual databases. Therefore, to maximize the return on a storage acceleration purchase, businesses should look for solutions that optimize both read and write IO workloads.
Seamless VM Operations
Many activities require VMs to move from host to host. For example, VMware’s server vMotion and DRS (Distributed Resource Scheduler) utilities enable VMs to migrate across hosts as additional server resources are needed to support application workloads. In fact, many organizations leverage DRS in tandem with VMware High Availability (HA) clusters to pool and aggregate server resources so they can be dynamically shared amongst VMs. Therefore, it is critical for the storage acceleration software to work seamlessly with these features to help ensure consistent application performance in highly dynamic virtualized environments.
An ideal way for storage acceleration software products to support VM mobility is through a clustered architecture, where any host can remotely access the high-speed server resources on any other host in the cluster. In this manner, if a VM migrates to another host, its cached application data will always be locally available in the shared pool of server resources. This minimizes latency and helps to ensure consistent application performance.
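The lookup path described above can be sketched in a few lines of code. This is a simplified illustration of the general clustered-cache idea, not PernixData's actual implementation; the `ClusteredCache` class and its methods are hypothetical names invented for this example.

```python
class ClusteredCache:
    """Illustrative sketch of a clustered acceleration layer: after a VM
    migrates, a read checks local flash/RAM first, then the peer hosts in
    the cluster (e.g. the VM's previous host), and only then falls back
    to shared storage. All names here are hypothetical."""

    def __init__(self, host_id, peers):
        self.host_id = host_id      # this host's identifier
        self.peers = peers          # other hosts in the acceleration cluster
        self.local = {}             # block_id -> data cached on local flash/RAM

    def read(self, block_id):
        # 1. Local hit: lowest latency, served from this host's flash/RAM.
        if block_id in self.local:
            return self.local[block_id]
        # 2. Remote hit: another host in the cluster still holds the block,
        #    so no bulk cache migration is needed after a vMotion.
        for peer in self.peers:
            data = peer.local.get(block_id)
            if data is not None:
                self.local[block_id] = data   # promote to the local cache
                return data
        # 3. Miss everywhere: fall back to the shared storage array.
        data = self.read_from_shared_storage(block_id)
        self.local[block_id] = data
        return data

    def read_from_shared_storage(self, block_id):
        # Placeholder for a SAN/NAS read in this sketch.
        return f"data-{block_id}"
```

The key design point is step 2: because a migrated VM's cached blocks remain reachable over the cluster, the cache never has to be evicted and rebuilt, which is what keeps vMotion transparent to application performance.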
Non-clustered storage acceleration solutions handle vMotions by migrating "hot files" (or the entire data footprint) from server "A" to server "B" during the process. This takes precious time and eats up valuable network resources, especially since a working dataset can be several gigabytes in size. While this may be workable for occasional server vMotion activities, it does not function well with DRS, HA and other VM operations simply because it requires too much data movement. If a cache must be evicted and rebuilt during the vMotion process, it will not be seamless to the application workload(s), and application performance will not be fully optimized.
An additional benefit of clustering server resources is that it builds fault tolerance for writes into the acceleration environment. If a host fails before write IOs can be destaged to a storage device, data loss will occur. Clustering overcomes this obstacle by synchronously replicating write data across multiple hosts. In doing so, both reads and writes can be accelerated with no risk of data loss in the event of a device failure.
Fault tolerance is especially needed when RAM is used for server-side storage acceleration. That is because RAM is volatile, meaning data does not remain persistent in memory when the host shuts down. So, for RAM to be a viable media for IO acceleration, data needs to be synchronously replicated across clustered hosts to protect against data loss when a host reboots or fails.
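The write-protection scheme described in the last two paragraphs can be summarized in pseudocode-style Python. This is a hedged sketch of the general technique (synchronous replication before acknowledgment), not FVP's API; `ReplicatedWriteCache` and every name in it are invented for illustration.

```python
class ReplicatedWriteCache:
    """Sketch of fault-tolerant write acceleration: a write is acknowledged
    to the VM only after it has been synchronously copied to peer hosts,
    so a single host failure (or volatile RAM) cannot lose data that has
    not yet been destaged to shared storage. Hypothetical names throughout."""

    def __init__(self, peers, replicas=1):
        self.buffer = {}        # writes staged locally, not yet destaged
        self.peers = peers      # peer hosts holding replica copies
        self.replicas = replicas  # how many peer copies policy requires

    def write(self, block_id, data):
        # Stage the write locally (in server RAM or flash).
        self.buffer[block_id] = data
        # Synchronously replicate to peers BEFORE acknowledging, so the
        # data survives if this host fails before destaging completes.
        acked = 0
        for peer in self.peers:
            peer.buffer[block_id] = data
            acked += 1
            if acked >= self.replicas:
                break
        if acked < self.replicas:
            raise RuntimeError("not enough peers to meet replication policy")
        return "ack"            # only now is the VM's write acknowledged

    def destage(self, storage):
        # Later, lazily flush buffered writes to the shared storage array.
        for block_id, data in self.buffer.items():
            storage[block_id] = data
        self.buffer.clear()
```

The ordering is the whole point: replication happens on the acknowledgment path, which is what makes volatile media like RAM safe to use for write acceleration.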
Ease of Deployment
Today’s “forever on” IT infrastructures require technologies that can be integrated without a large maintenance window. So one of the first things that data center planners should look for in storage acceleration software is whether it can be quickly implemented with no disruption to existing VMs, hosts or storage. Ideally, the solution should install in a matter of minutes so that it is minimally disruptive to business operations. It should require no changes to VMs (i.e. guest agents), it should work with any flash or RAM within the server (i.e. server hardware agnostic) and it should seamlessly interoperate with any storage resource (block, file and local attached) without requiring a reboot of the host or a change to any configured network mount points.
There are typically two ways to deploy storage acceleration software – as a virtual machine, or inside the hypervisor. The latter has distinct advantages, particularly in terms of performance. When inside the hypervisor, the storage acceleration software does not have to contend with other VMs for access to the hypervisor’s resources. In addition, deployment inside the hypervisor avoids fault tolerance issues that can occur when the acceleration software is run at the VM level.

But it is important to explore how the software is implemented inside the hypervisor. If, for example, the storage acceleration software vendor utilizes a private API to integrate with the hypervisor, there is a risk that the API may not work with a future release of the hypervisor software. That’s why it is important for the storage acceleration software to use public APIs that are directly supported by the hypervisor vendor. This helps to future proof the storage acceleration solution and mitigate the risk of software incompatibility issues that can cause performance disruption.
It’s also useful to explore how well storage acceleration solutions integrate with hypervisor management tools, like vCenter. When all command and control functions are handled using common management tools (e.g. native vSphere web integration), virtual administrators can continue to use the toolsets they’re already used to working with.
When storage acceleration software is implemented in the data center, it is possible for businesses to significantly reduce application latency on virtualized workloads, lower storage costs, maximize storage utilization, and significantly increase VM density. This helps organizations get a higher return on their investments in virtualized infrastructure.
PernixData FVP software delivers the benefits described above while satisfying all of these decision criteria. The product integrates directly into the vSphere hypervisor using public APIs certified by VMware. Once installed, you can cluster any high-speed resource (RAM and/or flash) to accelerate reads and writes to file, block and/or locally attached storage.
By clustering RAM and flash resources across all hosts, FVP helps to improve the utilization of server investments, enhances infrastructure resiliency and allows for server vMotion activities to take place without hindering application performance. In addition, by accelerating both read and write workloads using high speed server resources like flash and RAM, infrastructure planners can remove storage IO bottlenecks in the data center and continue scaling out their virtualized environment to meet business demands.
PernixData is a client of Storage Switzerland