It’s time to “VMware” Storage

Before hypervisors like VMware, Hyper-V and KVM came to market, data centers had few options for managing the growth of their server infrastructure. They could buy one big server that ran multiple applications, which simplified operations and support but left each application at the mercy of the others in terms of reliability and performance. Alternatively, IT professionals could buy a server for each application as it came online, but this sacrificed operational efficiency and IT budget to the demands of fault and performance isolation. Until hypervisors arrived, the latter choice was considered the best practice.

Hypervisors changed everything. Suddenly, workloads running on the same server were effectively fault-isolated from one another: if one failed, it did not impact the others. These workloads were also portable; they could be moved between physical servers running a hypervisor with relative ease. More recently, quality of service parameters can be applied to workloads so that no single workload dominates an entire physical server and starves the others of resources.

Storage is in the same state today that servers were in five or six years ago. It is difficult for a single storage system to be all things to all workloads. Some systems are better at raw performance, some at consistent, cost-efficient performance, and still others at long-term, highly cost-efficient archiving.

As a result, the modern data center is a hodgepodge of different storage systems. It is not uncommon for a data center to have a storage system dedicated to a virtual desktop infrastructure (VDI), two or three storage systems dedicated to the virtual server infrastructure, and one or two systems dedicated to standalone business applications like MS-SQL, Oracle and Exchange. There is also often a storage system dedicated to user data like Word, PowerPoint and Excel files. Finally, there is a storage system dedicated to storing the ever-growing unstructured data generated by surveillance cameras, sensors and other Internet of Things devices.

Each of these workloads has distinct storage capacity and performance requirements. A storage system designed for one of them has a clear cost or performance advantage over a more general-purpose system that tries to manage a mixed workload environment.

Is Storage Consolidation A Lost Cause?

IT professionals have two options when it comes to solving this problem. First, they could jump into hyper-convergence with both feet, replacing the entire storage (and server) infrastructure with a hyper-converged architecture that consolidates storage and server resources. While this simplifies management, it is a significant "rip and replace" of the environment, and hyper-convergence has its own issues when it comes to managing the workload mixture described above.

Change, or rip and replace, is difficult for the data center. It is expensive and it is an unknown. As a result, most data centers choose the other option: continue to build out the existing storage infrastructure as workloads demand. For example, many data centers are deploying all-flash arrays to meet the demands of a highly dense virtual infrastructure, or object storage systems to cost-effectively store Internet of Things data.

While this second "strategy" layers in yet another storage silo, it does meet the specific performance and/or capacity demands of the given workload, and it requires no change to current IT processes and procedures. It does, however, increase the operational and capital expense of the overall storage infrastructure. Today, IT organizations are brute-forcing their way through these problems, but it is reasonable to assume that this approach won't work long term and may already be costing the organization more than it can afford to spend.

To keep pace with the new agile data center, storage needs to become more like the hypervisors it complements. Accomplishing this requires a new storage management paradigm: data mobility. Data mobility solutions need to meet five basic requirements: complement existing storage, support all bare-metal operating systems and hypervisors, provide unlimited but independent scaling, provide application-level quality of service, and reduce management overhead to deliver an immediate return on investment.

The Five Requirements of Data Mobility

1 – Complement Existing Storage – Data centers have already made a large investment in storage resources, and they need to make sure that investment is fully realized. These storage assets are optimized for specific use cases and have well-vetted data services that customers are comfortable with. The missing link is data mobility, so that workloads can be moved between these systems as their performance or capacity profiles change. In other words, data mobility extends existing capabilities rather than replacing them.

2 – Broad Platform Support – While some vendors are moving to the virtualization of volumes – VMware VVols, for example – data mobility solutions can't be limited to a single hypervisor or operating system. They need to transcend these environments so that multiple hypervisors, operating systems and even bare-metal servers can be used and data can move seamlessly between them.

3 – Independent Granular Scaling – Storage infrastructures scale along two axes: capacity and performance. But they seldom scale along both at the same time. A data mobility solution should allow performance and capacity to scale independently of each other. For performance, the solution should be able to leverage every form of storage, from RAM or PCIe SSDs in the server to all-flash or hybrid arrays on the storage network. For capacity, arrays could be added as needed to store older data that has not been accessed for some time or where performance is not important. This allows the data center to leverage its assets in a logical and cost-effective way, as the sketch below illustrates.
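
As a rough illustration only, the Python sketch below models a pool in which performance tiers (server-side PCIe flash, all-flash arrays) and capacity tiers (high-capacity or archive arrays) are tracked and expanded independently. The class names, tier names and numbers are hypothetical and are not tied to any particular product.

from dataclasses import dataclass, field

@dataclass
class Tier:
    name: str
    capacity_tb: float   # usable capacity this tier contributes
    max_iops: int        # aggregate performance this tier contributes

@dataclass
class StoragePool:
    """Hypothetical pool where performance and capacity scale independently."""
    performance_tiers: list = field(default_factory=list)
    capacity_tiers: list = field(default_factory=list)

    def add_performance(self, tier):
        # Adding server-side flash or an all-flash array raises IOPS
        # without forcing a matching capacity purchase.
        self.performance_tiers.append(tier)

    def add_capacity(self, tier):
        # Adding a high-capacity or archive array grows usable space
        # without paying for performance the data does not need.
        self.capacity_tiers.append(tier)

    def total_iops(self):
        return sum(t.max_iops for t in self.performance_tiers)

    def total_capacity_tb(self):
        return sum(t.capacity_tb for t in self.performance_tiers + self.capacity_tiers)

# Scale performance and capacity on separate schedules.
pool = StoragePool()
pool.add_performance(Tier("server PCIe SSD", capacity_tb=2, max_iops=500_000))
pool.add_capacity(Tier("capacity array", capacity_tb=200, max_iops=5_000))
print(pool.total_iops(), pool.total_capacity_tb())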

4 – End-to-End Quality of Service – Quality of Service (QoS) is not a new concept for the data center. It has been implemented at specific points in the infrastructure, most notably the network, for quite some time. Some storage systems have a QoS feature, as do some hypervisors. But QoS is seldom implemented end to end, from the application to the storage system. Data mobility solutions need to facilitate communication between these layers so that application data can be placed on the appropriate storage hardware, and even rerouted across storage networks, so that applications meet the needs of users while remaining cost effective. A simplified example of such an application-level policy follows.
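
The Python sketch below shows roughly what an application-level QoS policy could look like; the field names, thresholds and placement rules are assumptions made for illustration, not the schema of any shipping QoS implementation.

from dataclasses import dataclass

@dataclass
class QoSPolicy:
    """Illustrative per-application service-level targets."""
    app_name: str
    min_iops: int          # floor the application must receive
    max_latency_ms: float  # ceiling on acceptable response time
    capacity_gb: int       # space reserved for the application

def placement_for(policy):
    """Map a policy to a class of storage (tier names are hypothetical)."""
    if policy.max_latency_ms <= 1.0 or policy.min_iops >= 100_000:
        return "server-side flash / all-flash array"
    if policy.min_iops >= 10_000:
        return "hybrid array"
    return "capacity or archive tier"

# An OLTP database lands on flash; a user file share lands on a hybrid array.
oltp = QoSPolicy("orders-db", min_iops=150_000, max_latency_ms=0.5, capacity_gb=2_000)
files = QoSPolicy("user-shares", min_iops=15_000, max_latency_ms=20.0, capacity_gb=50_000)
print(placement_for(oltp), "|", placement_for(files))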

5 – Reduce Management Overhead To Achieve Greater ROI – The key to success in the future data center is automation, and that includes the storage infrastructure. At the heart of a data mobility solution should be a QoS-based policy engine that automatically moves application data either as access demands it or, when a predictable pattern of use can be established, in advance of demand. This allows the IT professional to fully leverage storage assets while automatically adjusting to meet user expectations. With a QoS-enabled data mobility solution, tuning storage performance is no longer a disruptive, manual process but one that is handled automatically by the software and then fine-tuned by the storage administrator as time allows. A toy version of such a policy loop is sketched below.
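
A toy version of that decision loop, reduced to a few lines of Python, might look like the following; the thresholds, tier names and volume statistics are assumptions for illustration only, and a real engine would also weigh predicted access patterns and the cost of each migration.

def desired_tier(iops_observed, days_since_access):
    """Pick a target tier from observed demand (thresholds are made up)."""
    if iops_observed > 50_000:
        return "all-flash"
    if days_since_access < 30:
        return "hybrid"
    return "capacity"

def rebalance(volumes):
    """One pass of the engine: compare current placement with desired placement."""
    for name, stats in volumes.items():
        target = desired_tier(stats["iops"], stats["idle_days"])
        if target != stats["tier"]:
            # A real solution would trigger a non-disruptive migration here.
            print(f"move {name}: {stats['tier']} -> {target}")
            stats["tier"] = target

volumes = {
    "vdi-pool":    {"tier": "hybrid",    "iops": 80_000, "idle_days": 0},
    "old-archive": {"tier": "all-flash", "iops": 10,     "idle_days": 200},
}
rebalance(volumes)   # a scheduler would run this pass periodically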

Conclusion

Today, the storage that supports a VMware infrastructure has little ability to mimic the capabilities of the hypervisor. The storage side of an application workload is typically siloed on a single storage system, and movement of that data between systems happens by brute force, if at all. The decision to move those volumes is usually made manually, after users complain about a performance problem. Data mobility preserves the features of the storage infrastructure that are already working while adding the capabilities it lacks: the ability to abstract the physical location of an application's data from a siloed volume, and to automatically move that data as user demand changes, allows all storage assets to be fully leveraged while keeping users satisfied.

Sponsored by ioFABRIC

ioFABRIC is an example of a company on the cutting edge of delivering solutions that "VMware" storage by abstracting data and making it mobile between storage systems. Its Vicinity software is storage virtualization software designed to meet the performance and economic challenges of the new software-defined data center. ioFABRIC is looking for beta customers and storage industry partners; contact info@iofabric.com to get involved.


Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.

3 comments on "It’s time to “VMware” Storage"
  1. Fred Uygur says:

    Does not the eventual migration to 100% flash storage obviate this need? All apps would run at the speed of memory regardless of location. Once the SSD cost curve crosses disk pricing (ETA 2016), why buy anything but 100% flash? Why deal with the “complexity” of managing these virtual volumes?

    • George Crump says:

      Not really. First, the “eventual migration to 100% flash” may never happen; there will likely always be another tier in the data center, whether high-capacity disk or cloud storage. Also, even if we do get to 100% memory-based storage, it will not be 100% flash. It will be a mixture of newer memory technologies and various flash technologies. I seriously doubt we will ever get to a single storage tier. -George
