Virtualizing Mission Critical Applications Exposes New Storage Bottlenecks

IT administrators have reached a level of confidence with VMware and other hypervisors that makes them more comfortable virtualizing mission-critical workloads like Oracle, SAS, and SAP. The goal is to gain the flexibility of a virtual compute infrastructure so these organizations become more responsive and reliable. These mission-critical applications, though, create problems for the storage architecture and expose new bottlenecks.

Virtualizing workloads like Oracle, SAP, and SAS increases pressure on the virtual infrastructure. Concerns like the “IO Blender” (where sequential IO streams from many VMs interleave into a largely random pattern at the shared datastore) resurface, and many organizations find that an all-flash array may not deliver the consistent performance these applications require.
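As a rough illustration of the IO Blender effect, the sketch below is a simplified Python model, not tied to any particular hypervisor or array. It shows how per-VM sequential request streams arrive at a shared datastore in an interleaved order that looks almost entirely random to the storage system.

```python
import random

def vm_stream(vm_id, start_lba, num_requests):
    """Purely sequential reads, as a single VM would issue them."""
    return [(vm_id, start_lba + i) for i in range(num_requests)]

def io_blender(streams):
    """Interleave per-VM streams the way a shared datastore sees them.
    The hypervisor schedules VMs independently, so the merged order is
    effectively random from the array's point of view."""
    merged = [req for stream in streams for req in stream]
    random.shuffle(merged)  # crude stand-in for interleaved VM scheduling
    return merged

# Four VMs, each reading sequentially from its own virtual disk region.
streams = [vm_stream(vm, vm * 1_000_000, 8) for vm in range(4)]
blended = io_blender(streams)

# Count how many back-to-back requests are still sequential after blending.
sequential = sum(1 for a, b in zip(blended, blended[1:])
                 if a[0] == b[0] and b[1] == a[1] + 1)
print(f"{sequential} of {len(blended) - 1} adjacent requests remain sequential")
```

In practice almost none of the adjacent requests remain sequential, which is why workloads that behave well on dedicated storage can struggle once they share an array with dozens of other VMs.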

Identifying the primary performance bottleneck for these applications is difficult. The data center today can apply plenty of compute resources to virtualizing these mission-critical applications. The network is up to the task thanks to 32Gbps FC and up to 100Gbps Ethernet. Storage protocols that use these high-speed networks are improving thanks to NVMe, which delivers much more efficient transfers than traditional SCSI. Even the storage media, in the form of NVMe flash, can handle the new IO load. The challenge, it appears, lies in the internals of the storage system itself.

The internals of the typical storage system consist of the storage media, the CPU resources that run the storage software, internal networking connecting the CPU to the storage media, and external networking connecting the storage system to the compute infrastructure. As flash continues to improve in raw performance and internal connectivity (NVMe), it exposes the storage software as the source of the next IO bottleneck. Vendors either need to redesign the storage architecture and rewrite the storage software, or optimize it to run on more efficient hardware.
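A simple latency budget makes the point. The figures below are illustrative assumptions, not measurements from any specific array, but they show the trend: as media latency falls, a fixed amount of storage software overhead becomes the dominant share of total response time.

```python
# Hypothetical per-request latency budget (microseconds). The figures are
# illustrative assumptions chosen only to show the trend, not vendor specs.
software_overhead_us = 150   # storage software: cache lookup, data services, scheduling
network_us = 20              # external fabric round trip (e.g., FC or high-speed Ethernet)

for media_label, media_us in [("SAS SSD", 500), ("SATA-era flash", 250),
                              ("NVMe flash", 80), ("fast NVMe flash", 20)]:
    total = software_overhead_us + network_us + media_us
    share = 100 * software_overhead_us / total
    print(f"{media_label:>15}: total {total:4d} us, "
          f"software is {share:4.1f}% of the response time")
```

With slow media the software overhead is a rounding error; with fast NVMe flash it is most of the latency the application sees.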

Many environments won’t push an all-flash array hard enough to expose the storage software bottleneck, but environments that virtualize mission-critical applications into the existing virtual infrastructure do. These environments can also tie increases in performance directly to increases in revenue; performance equals money for these organizations.

A potential solution to the software problem is to customize the storage software so that it runs on an FPGA instead of on an Intel processor. Putting an FPGA into a storage system is the equivalent of using a GPU for artificial intelligence; it allows the storage software to deliver the full complement of data services without impacting storage performance. The FPGA’s goal is to reduce the latency of the storage software.
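To make the trade-off concrete, the sketch below extends the same hypothetical budget: it compares a data path where the storage software runs entirely on a general-purpose CPU with one where the latency-sensitive data services are offloaded to an FPGA. The offload figure is an assumption chosen for illustration, not a measured Vexata number.

```python
# Hypothetical comparison of the data-path latency with the storage software
# running on a general-purpose CPU versus offloaded to an FPGA. All numbers
# are assumptions for illustration only.
media_us = 20           # fast NVMe flash
network_us = 20         # external fabric round trip

cpu_software_us = 150   # data services executed in software on the controller CPU
fpga_offload_us = 10    # same data services implemented in FPGA logic (assumed)

cpu_total = media_us + network_us + cpu_software_us
fpga_total = media_us + network_us + fpga_offload_us

print(f"CPU-based data path : {cpu_total} us per request")
print(f"FPGA-offloaded path : {fpga_total} us per request "
      f"({cpu_total / fpga_total:.1f}x lower latency)")
```

Under these assumed numbers, moving the data services off the general-purpose CPU removes most of the software-induced latency, which is the argument for the FPGA approach.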

The remaining processors in the storage system are not the top-of-the-line CPUs that other vendors must use; they are relatively low-end parts that only need to perform essential management functions.

The result, in a high-end virtualized environment, is that the organization can scale to increase VM density while at the same time supporting mission-critical workloads.

In our latest LightBoard Video, Vexata’s VP of Product Marketing, Rick Walsworth, joins Storage Switzerland to discuss the challenges that virtualizing these mission-critical applications creates. Then we discuss how its high-performance VX-100 scalable storage systems leverage FPGAs to remove the storage software bottleneck, maximizing IOPS and bandwidth while minimizing latency.

Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS and SAN, Virtualization, Cloud, and Enterprise Flash. Prior to founding Storage Switzerland, he was CTO at one of the nation’s largest storage integrators, where he was in charge of technology testing, integration, and product selection.
