VMware Storage Challenges

Since VMware burst into the data center, storage challenges have been a critical limiting factor in how thoroughly organizations adopt it. These challenges push organizations to buy more expensive storage systems than they should, to run multiple storage systems, and to layer in complex data protection hardware and software.

At the heart of VMware, or any virtual environment, is a clustered file system that every server/node hosting virtual machines (VMs) needs to see so that features like vMotion and Site Recovery Manager can function correctly. This file system must also deal with an “I/O Blender” of highly random I/O streams generated by multiple workloads running as VMs on multiple nodes.
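To picture the I/O Blender, consider a toy simulation (a hypothetical sketch, not VMware code): each VM reads its own virtual disk sequentially, but once the hypervisor interleaves those streams onto the shared datastore, the combined request pattern looks random to the storage array.

```python
import random

# Hypothetical sketch of the "I/O Blender": each VM reads its own
# virtual disk sequentially, but the shared datastore sees the
# interleaved result, which looks random.

NUM_VMS = 4
IOS_PER_VM = 5
DISK_REGION = 1_000_000  # blocks per virtual disk region

# Each VM's stream is sequential within its own region of the datastore.
streams = {
    vm: [vm * DISK_REGION + i for i in range(IOS_PER_VM)]
    for vm in range(NUM_VMS)
}

# The hypervisor services whichever VM issues I/O next, interleaving streams.
blended = []
pending = {vm: iter(s) for vm, s in streams.items()}
while pending:
    vm = random.choice(list(pending))
    try:
        blended.append(next(pending[vm]))
    except StopIteration:
        del pending[vm]

print(blended)  # adjacent requests jump between distant disk regions
```

Each stream on its own is perfectly sequential; it is the blending at the shared file system that destroys locality, which is why hard disk arrays struggled so badly.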

Learn how to solve VMware storage challenges

Register for a live technical roundtable on the challenges of hypervisor file systems on May 4th at 1:00 PM ET / 10:00 AM PT

Do AFAs solve VMware Storage Challenges?

VMware was one of the first and best use cases for a Storage Area Network (SAN), but the I/O Blender quickly exposed the performance limitations of hard disk drive-based arrays. The industry moved to hybrid storage and all-flash arrays (AFAs) to address the random I/O problem. However, AFAs don’t solve VMware’s storage challenges; they hide them behind high-performance hardware.

AFAs have dedicated, high-end storage processing, require a high-performance network connection, and use enterprise-class solid-state drives. These components, plus egregious vendor markup, lead to an acquisition cost that is often higher than the cost of the compute layer.
Using an AFA to scale your VMware environment is like cutting a watermelon with a golden sledgehammer. It’ll work, but there is a better, more surgical way to get the job done.

Do HCI solutions solve VMware Storage Challenges?

The other common approach to solving VMware storage challenges is hyperconverged infrastructure (HCI). HCI moves the storage system into software and onto the same physical hardware as the hypervisor. The problem is that this software is still separate from the hypervisor and often runs as a virtual machine. While HCI vendors claim to eliminate the need for a SAN, a network still has to accommodate the storage traffic, and IT professionals frequently end up dedicating a network to it anyway.

The burden of running as a virtual machine means that, for adequate storage performance, the customer must buy more powerful processors for their nodes, higher-performance network connections between those nodes, and high-end flash drives to handle the random I/O patterns. In other words, customers face the same problems as with a dedicated all-flash array.

The Root Cause of VMware Storage Challenges

Solving any problem requires understanding its root cause. In the case of VMware storage challenges, the root cause is VMware itself. It is the one constant in all the methods IT uses to work around its storage problems. Specifically, the storage services built into VMware are minimal, and the potential solutions, AFAs or HCI, depend on VMware.

Snapshots are a good example. VMware’s snapshot technology must “stun” the VM before taking the snapshot. It also burdens the hypervisor significantly because it is inefficient at tracking production and snapshotted blocks, which impacts overall performance. AFA and HCI vendors must work within VMware’s limited snapshot capabilities to enable their own snapshots. Because they operate independently of the hypervisor, these storage solutions have limited and varying visibility into what they protect.
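To see where that stun occurs, here is a minimal sketch using pyVmomi, VMware’s Python SDK for the vSphere API (the vCenter address, credentials, and VM name are placeholders). Creating a snapshot, quiesced or not, briefly pauses the VM, and from that point on the hypervisor must track every changed block.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; replace with your own vCenter.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

# Locate the VM (the name "demo-vm" is hypothetical) via a container view.
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "demo-vm")

# CreateSnapshot_Task briefly "stuns" the VM while the snapshot is
# created; quiesce=True also flushes guest I/O through VMware Tools.
# Every write after this point must be tracked against the snapshot.
vm.CreateSnapshot_Task(name="before-change",
                       description="example snapshot",
                       memory=False, quiesce=True)

Disconnect(si)
```

The stun itself is brief, but the ongoing cost of tracking snapshot deltas is why long-lived VMware snapshots degrade performance.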

Another example is deduplication. VMware’s developers didn’t build the technology into the core storage capabilities of the hypervisor, so customers must depend on bolt-on solutions to provide the functionality. Most HCI solutions can only offer deduplication with a significant performance penalty. AFA solutions do offer it, but, again, the customer pays for the extra RAM and processing power required to enable it.

The Ripple Effect of VMware Storage Challenges

VMware storage challenges create a ripple effect in the data center, increasing costs and complexity. The lack of storage I/O scalability means customers add nodes to their VMware cluster long before they fully utilize the nodes they already have.

The lack of integration between storage and VMware makes data protection and disaster recovery more complex. Without reliable, retainable snapshots, and without visibility into what is being snapshotted, most customers depend on their backup application for recovery rather than on the capabilities of their primary storage array. The result is that storage is often the most expensive component of the virtual infrastructure.

Solving the VMware Storage Challenges

Solving VMware storage challenges means fixing or replacing VMware. As the latest version of vSAN shows, fixing VMware’s storage capabilities through a never-ending series of bolt-ons leads to further inefficiency that requires customers to continue to buy faster and more expensive storage media, processors, and RAM.

The only alternative is to replace VMware with a data center operating system, like VergeOS, that more efficiently leverages storage resources and better utilizes computing and networking resources. VergeOS is a single piece of software that integrates storage, the hypervisor, and network functionality, enabling customers to use off-the-shelf commodity servers, storage, and networking while delivering consistent, high performance.

The VergeOS file system is inherently deduplicated, so features like IOclone can support thousands of snapshots or copies, retained indefinitely, without impacting performance. Read the latest VergeIO blog for a detailed comparison of snapshots and clones.
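To illustrate why a deduplicated file system makes copies so cheap, consider this conceptual sketch (for illustration only; it is not VergeOS code): when every unique block is stored once and referenced by its hash, a clone is just a copy of a reference list, no matter how much data that list points to.

```python
import hashlib

# Conceptual sketch of content-addressed deduplication; not VergeOS code.
BLOCK_SIZE = 4096
block_store = {}  # hash -> block bytes; each unique block is stored once


def write_volume(data: bytes) -> list[str]:
    """Split data into blocks, store each unique block once,
    and return the volume as a list of block hashes."""
    refs = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        block_store.setdefault(digest, block)  # dedup: skip known blocks
        refs.append(digest)
    return refs


def clone_volume(refs: list[str]) -> list[str]:
    """A clone is just a copy of the reference list: O(metadata),
    regardless of how large the underlying data is."""
    return list(refs)


vol = write_volume(b"hello world" * 10_000)
snap = clone_volume(vol)  # instant; shares all blocks with the original
print(len(vol), len(set(vol)), len(block_store))  # many refs, few unique blocks
```

Because a copy never duplicates data blocks, thousands of clones consume only metadata, which is the property that lets a deduplicated file system retain snapshots indefinitely.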

A VMware Exit at Your Pace

Replacing VMware does not need to be an all-at-once conversion. Organizations can start by using VergeOS as a disaster recovery solution with IOprotect, which has a significant advantage over array-based replication because it provides a complete disaster recovery infrastructure, not just the data. Then, when IT is ready for a VMware exit, it can gradually move production VMs to the VergeIO infrastructure. Users of those workloads won’t notice any difference, except that the workloads will perform better and be better protected.

Conclusion

To lower data center costs and complexity, IT needs a solution that fixes the problem instead of hiding it behind more expensive hardware and a never-ending parade of software bolt-ons. VergeOS is the fix. Its advanced file system is part of the same code base as its hypervisor and networking functionality. Customers can leverage existing hardware and see better performance and data resiliency while lowering software licensing costs.

Click here to take VergeOS for a test drive. In a matter of minutes, the team will set you up with your own Virtual Data Center so you can create VMs and see how easy it is to operate.

George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.
