Everyone acknowledges that flash storage is faster than spinning disk, and most agree that the closer this storage sits to the CPU, the more it accelerates I/O performance. Everyone also agrees that the only thing currently faster than flash is RAM, and that the more RAM an application has, the better it performs. As they say in court, “these are the facts, and they are undisputed.” The questions begin when one examines the advantages and disadvantages of the different ways to use flash and RAM to accelerate VMs.
Flash first appeared as separate devices and arrays that connected to a physical or virtual server just like any other storage device. The challenge with this approach is that I/O operations have to traverse the PCI bus and the storage network before they reach the device that was meant to accelerate performance. The industry responded with server-side flash: devices connected directly to the server’s PCI bus, removing storage-network latency from the equation. The downside is that once the storage moves inside the server, it typically can no longer be shared with another physical node.
There are software solutions that leverage server-side flash. Typically these products perform read-only caching and occasionally write-through caching. This style of caching accelerates reads within a node, but writes are not accelerated. Also, if one uses vMotion to move a VM from one node to another, the VM loses its cached data during the operation, because the flash in each node is standalone; the VM’s data must requalify for the flash in the new node. The impact can be frustrating to users, who suffer a return to hard disk performance while the data requalifies.
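The distinction between these caching styles can be sketched in a few lines. The following is an illustrative model only (not any vendor’s actual code): a write-through cache must wait for the slow backing store on every write, while a write-back cache acknowledges writes from the cache and flushes them later. The class and method names here are hypothetical.

```python
class BackingStore:
    """Stands in for a slow shared datastore."""
    def __init__(self):
        self.blocks = {}

    def write(self, addr, data):
        self.blocks[addr] = data      # imagine full datastore latency here

    def read(self, addr):
        return self.blocks.get(addr)


class WriteThroughCache:
    """Reads are served from cache once a block qualifies, but every
    write also goes to the backing store, so writes see full latency."""
    def __init__(self, store):
        self.store = store
        self.cache = {}

    def write(self, addr, data):
        self.cache[addr] = data
        self.store.write(addr, data)  # not acknowledged until this completes

    def read(self, addr):
        if addr not in self.cache:    # cache miss: fetch and qualify the block
            self.cache[addr] = self.store.read(addr)
        return self.cache[addr]


class WriteBackCache(WriteThroughCache):
    """Writes are acknowledged from the cache and flushed to the backing
    store later, so writes are accelerated as well as reads."""
    def __init__(self, store):
        super().__init__(store)
        self.dirty = set()

    def write(self, addr, data):
        self.cache[addr] = data
        self.dirty.add(addr)          # acknowledged immediately

    def flush(self):
        for addr in self.dirty:
            self.store.write(addr, self.cache[addr])
        self.dirty.clear()
```

The sketch also shows why write-back caching demands fault tolerance: until `flush()` runs, dirty blocks exist only in the cache, so a media failure there would lose data unless it is replicated elsewhere.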
PernixData FVP Software
PernixData’s first product, FVP, provides read caching and write-back caching, accelerating both reads and writes to any storage the product fronts. Where most products use only flash for acceleration, FVP clusters both flash and RAM, and clusters them across multiple nodes. This means an administrator can vMotion a VM from one node to another without losing its cache, and cached data can be replicated between hosts for fault tolerance. PernixData gives away a read-only version of FVP, called Freedom, that provides read acceleration using up to 128 GB of RAM. This product can be upgraded to support write-back caching, flash, and other enterprise-grade features such as fault domains. FVP runs in the VMware kernel, allowing it to offer all this functionality without consuming additional resources.
PernixData has now added a product called Architect for infrastructure analytics. It collects and displays information useful for architecting and managing storage in a virtualized environment, such as read/write mix, block sizes, IOPS, throughput, latency, and more. Other products offer pieces of this information, or offer similar information for a specific vendor’s storage offering, but Architect’s unique position inside the hypervisor lets it collect more granular information on VM behavior and correlate it with information from any storage device, regardless of which brand of host or storage a customer is using. Like FVP, Architect runs in the kernel.
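To make the metrics above concrete, here is a minimal sketch (not Architect’s implementation, and all names here are hypothetical) of how read/write mix, IOPS, throughput, average latency, and block-size distribution could be derived from a sample of per-I/O records collected over a time window:

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class IORecord:
    op: str           # "read" or "write"
    size: int         # bytes transferred
    latency_ms: float # completion latency


def summarize(records, window_s):
    """Reduce a window of I/O samples to headline storage metrics."""
    reads = [r for r in records if r.op == "read"]
    total = len(records)
    return {
        "read_pct": 100.0 * len(reads) / total,                       # read/write mix
        "iops": total / window_s,                                     # I/Os per second
        "throughput_mbps": sum(r.size for r in records) / window_s / 1e6,
        "avg_latency_ms": sum(r.latency_ms for r in records) / total,
        "block_sizes": Counter(r.size for r in records),              # size histogram
    }
```

The value of collecting this in the hypervisor, as the article notes, is that each record can be attributed to a specific VM and correlated with whatever storage sits underneath.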
Read acceleration is good; write acceleration is even better, provided those writes are protected from media failure. Flash is good; RAM is even better. PernixData offers a product that gives customers all of this and lets them share it between hosts. In addition, the company has created an analytics product that gives IT professionals insight into application behavior, allowing them to optimize storage performance. Each product can be used without the other, but the combination of the two should be extremely strong and a clear differentiator for PernixData.