Curing Storage Blindness

As modern organizations struggle to keep up with the ever-growing data deluge, they have been forced to deploy and overprovision multiple types of storage in order to meet their service level agreements (SLAs), leading to unprecedented storage sprawl. Storage consolidation is the battle cry to resolve this issue, but the reality is that each application requires a different SLA, and it is difficult, if not impossible, to find a single storage system that can meet all of them. The end result is that organizations are left managing as many as 20 disparate storage systems.

Storage Blindness

Managing data across many different storage solutions is a complex and time-consuming process, and in most cases storage admins cannot really see what is going on in many of their storage systems because they lack the tools to do so. In other words, they are not really managing anything; they are reacting to problems as they arise.

The challenge is that the value of data changes over time. Data that is hot today may be warm or cold tomorrow and should be moved off expensive high-performance tiers, but if a given data set becomes active again, it can suddenly justify moving from cold or warm storage back to high-performance storage. However, storage admins are unable to easily determine which data on a given storage system is hot, warm, or cold. They also cannot “see” the exact capabilities of a given storage system, or how taxed those capabilities are with current workloads. Assigning a newly active data set to high-performance storage that is already under heavy load may not meet the performance demand and may actually make matters worse.
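To make that hot/warm/cold decision concrete, here is a minimal sketch of the logic a tiering engine would have to apply. Everything in it (the seven-day and ninety-day windows, the 80 percent utilization cap, the function names) is an illustrative assumption, not any vendor's actual implementation.

```python
from datetime import datetime, timedelta

# Hypothetical recency thresholds; a real tiering engine would tune
# these per workload rather than hard-coding them.
HOT_WINDOW = timedelta(days=7)
WARM_WINDOW = timedelta(days=90)

def classify(last_access: datetime, now: datetime) -> str:
    """Label a data set hot, warm, or cold by time since last access."""
    age = now - last_access
    if age <= HOT_WINDOW:
        return "hot"
    if age <= WARM_WINDOW:
        return "warm"
    return "cold"

def should_promote(temperature: str, fast_tier_utilization: float) -> bool:
    """Promote newly hot data only if the fast tier has headroom:
    moving data onto an already saturated tier can make matters worse."""
    return temperature == "hot" and fast_tier_utilization < 0.80  # assumed cap
```

Note that the promotion check considers the target tier's current load, not just the data's temperature, which is exactly the visibility most admins lack today.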

With organizations looking for new ways to extract value from their data, they need a way to quickly determine what data they have and where it is located, along with a means to make it readily accessible. They also need a way to determine which of their available storage systems is the best candidate for that data. In short, they need a way to manage their data instead of reacting to it.

Curing the Blindness

Meeting these challenges requires a new type of solution that enables users to accurately determine what is happening on any given storage system in the enterprise, as well as what types of data are located on it. It should also provide the means to dynamically and automatically move data to the most appropriate storage tier, regardless of type, based on user-defined data policies and objectives, without disrupting applications in the process. Additionally, such a solution would provide global visibility into workload requirements and data utilization rates, and offer recommendations for meeting potential future storage needs.
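As a rough sketch of what those user-defined policies and objectives might look like in practice, the following picks the cheapest tier that still satisfies a latency objective without overloading the target. The `Tier` and `Policy` structures and all of their fields are hypothetical illustrations, not drawn from any particular product.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tier:
    name: str            # e.g. "nvme-das", "nas", "object" (illustrative)
    latency_ms: float    # typical access latency
    cost_per_gb: float   # relative monthly cost
    utilization: float   # current load, 0.0 to 1.0

@dataclass
class Policy:
    """A user-defined objective: stay under a latency ceiling at minimum cost."""
    max_latency_ms: float
    max_utilization: float = 0.80  # avoid placing data on saturated tiers

def place(policy: Policy, tiers: list[Tier]) -> Optional[Tier]:
    """Return the cheapest tier that meets the policy, or None if none qualifies."""
    candidates = [t for t in tiers
                  if t.latency_ms <= policy.max_latency_ms
                  and t.utilization <= policy.max_utilization]
    return min(candidates, key=lambda t: t.cost_per_gb, default=None)
```

A real platform would presumably run such an evaluation continuously and trigger a non-disruptive migration whenever a data set's current placement no longer satisfies its policy.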

Such a solution would also be able to transform disparate storage silos, regardless of type, into seamless, globally accessible resources spanning multiple storage tiers, whether they be in-server flash (DAS), file (NAS), block (SAN), or cloud (object). In other words, it could transparently move data from any device to any device regardless of location, protocol, or device type. It would also be storage agnostic, helping organizations avoid vendor lock-in.
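One way to picture that any-to-any mobility is a thin, protocol-agnostic interface over each backend, so migration logic never cares whether the bytes sit on DAS, NAS, SAN, or an object store. The `Backend` abstraction and `migrate` helper below are hypothetical sketches, not a real product's API.

```python
from abc import ABC, abstractmethod

class Backend(ABC):
    """Uniform operations any storage tier must support, whatever its protocol."""
    @abstractmethod
    def read(self, location: str) -> bytes: ...
    @abstractmethod
    def write(self, location: str, data: bytes) -> None: ...
    @abstractmethod
    def delete(self, location: str) -> None: ...

def migrate(src: Backend, src_loc: str, dst: Backend, dst_loc: str) -> None:
    """Copy, then delete: moves data between any two backends that
    implement the interface, regardless of underlying device type."""
    dst.write(dst_loc, src.read(src_loc))
    src.delete(src_loc)
```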

In the recent past, storage virtualization was implemented in an attempt to consolidate disparate storage systems into a virtual global pool of storage and address these problems. However, as my colleague George Crump discusses in his article, “The Road to Data Mobility – Consolidation, Storage Virtualization or Data Virtualization”, storage virtualization has a number of critical limitations. What is needed is a more granular approach to moving data. An ideal out-of-band data virtualization platform would virtualize data by separating the metadata from the data itself, while using standard protocols to virtualize different storage tiers across a global dataspace, thus providing a logical abstraction of all physical storage within that single global dataspace.
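Here is a minimal sketch of that metadata/data separation, reusing the hypothetical `Backend` interface from the previous example: applications address data by a logical path in the global dataspace, and moving the bytes only rewrites the catalog's mapping, never the path the applications see.

```python
class Catalog:
    """Out-of-band metadata layer: maps a logical path in the global
    dataspace to wherever the data physically lives right now."""

    def __init__(self) -> None:
        self._map: dict[str, tuple[Backend, str]] = {}

    def resolve(self, logical_path: str) -> tuple[Backend, str]:
        """Look up the current physical home of a logical path."""
        return self._map[logical_path]

    def move(self, logical_path: str, dst: Backend, dst_loc: str) -> None:
        """Relocate data without disrupting applications: copy the bytes,
        repoint the metadata, then reclaim the old copy."""
        src, src_loc = self._map[logical_path]
        dst.write(dst_loc, src.read(src_loc))
        self._map[logical_path] = (dst, dst_loc)
        src.delete(src_loc)
```

In an out-of-band design, a catalog like this sits outside the data path, so reads and writes still go directly to the storage; only the mapping lives in the virtualization layer.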

StorageSwiss Take

Organizations today need a way to stay ahead of storage sprawl while maximizing the efficiency of existing storage resources. With an appropriate out-of-band data virtualization solution that is storage system agnostic and provides true data mobility across multiple disparate storage systems, organizations would finally be able to manage their data dynamically, efficiently, and cost-effectively while reining in storage sprawl.

Joseph is an Analyst with Storage Switzerland and an IT veteran with over 35 years of experience in the high-tech industry. He has held senior technical positions with several OEMs and VARs, providing technical pre- and post-sales support as well as designing, implementing, and supporting backup, recovery, and data protection/encryption solutions, along with disaster recovery planning and testing and data loss risk assessment in distributed computing environments on Unix and Windows platforms.
