What is Data Management Sprawl?

Data management is the art of intelligently moving data to the tier of storage that best balances performance requirements against the cost of storing it at each stage of its life cycle. In a perfect world, organizations would put all their data on flash in their data center, and everything would be fast all the time. The issue is that flash is expensive and in short supply, and data center floor space is at a premium. Given these realities, plus the unrelenting growth of data, organizations need a data management strategy that can address their current and long-term needs. The dilemma is that there are many partial solutions out there, which is itself part of the problem: data management sprawl.

Sprawl is Contagious

Data centers are experiencing sprawl in multiple forms: server sprawl, virtual machine sprawl, and data growth, a form of storage sprawl that leads in turn to sprawl in the number of storage systems organizations purchase. Then there is sprawl in the solutions that are supposed to manage sprawl: data management applications.

In terms of data management sprawl, there are data management applications that focus on email, media and entertainment (M&E) files, oil and gas files, healthcare files, IoT and sensor data… the list goes on. There is also sprawl in which aspect of management these various solutions provide. Most tend to focus on either identifying active data and moving it to flash, or identifying old data and moving it to the most cost-effective storage medium available. Oddly, very few do both.

There is also sprawl in the number of storage systems that claim to be ideal targets for a particular type or class of data. There are of course the all-flash systems that want active data, then there is a never-ending parade of storage systems that want to store inactive data, ranging from scale-out NAS, to object storage systems, to the cloud.

Eliminating Data Management Sprawl

Data management is really about one thing: managing data. All data has metadata that, assuming the data management solution knows how to harvest this information, should tell it everything it needs to know about the data it is getting ready to manage. From there it should be able to place data on faster or more cost-effective storage based on policies set by the user. Those policies are critical, since some data needs to override the default rules, for example "don't put the CEO's files on slow storage, ever" or "don't put financial data in the cloud, ever."

Armed with a proper understanding of the metadata the data provides, the data management solution should be able to place the data where it needs to be, and also move that data as its metadata changes or its life cycle stage evolves. The result should be one software application that can be used universally across industries, use cases, and storage types and locations.
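The policy logic described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual implementation: the metadata fields, tier names, and thresholds are assumptions chosen to show how user-set override rules would take precedence over default age-based tiering.

```python
from dataclasses import dataclass

@dataclass
class FileMeta:
    """Metadata a data management tool might harvest for each file (illustrative fields)."""
    owner: str               # hypothetical owner tag, e.g. "ceo"
    category: str            # hypothetical data class, e.g. "financial", "media"
    days_since_access: int   # age used for default life-cycle tiering

def choose_tier(meta: FileMeta) -> str:
    """Pick a storage tier from metadata, checking user override policies first."""
    # Override rules always win over the default age-based placement.
    if meta.owner == "ceo":
        return "flash"              # "don't put the CEO's files on slow storage, ever"
    if meta.category == "financial":
        return "on-prem-object"     # "don't put financial data in the cloud, ever"
    # Default life-cycle rules: active data to flash, colder data to cheaper tiers.
    if meta.days_since_access <= 30:
        return "flash"
    if meta.days_since_access <= 365:
        return "scale-out-nas"
    return "cloud-archive"
```

A real solution would evaluate rules like these continuously, re-placing data as the metadata (access time, ownership, classification) changes over time.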

Eliminating data management sprawl is just the first step in winning at data management. Join experts from Storage Switzerland, Dternity and StrongBox Data Solutions on our live webinar on April 13 at Noon ET / 9 a.m. PT.

Watch On Demand

Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, virtualization, cloud, and enterprise flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.

