Why Data Management is Cool Again

In the early data center, IT managed data because it HAD to. Online storage was too expensive to hold all the data that needed processing. But over time storage became less expensive, and storage system vendors created architectures that could scale to near-infinite proportions. Data management seemingly became unnecessary. Then flash, and eventually all-flash, arrived, resolving the final concern: performance. It seemed that data management was dead.

But data management, or at least the need for it, is not dead. The number of solutions that provide data management has increased rapidly over the past year. Data management is cool again. But how and why we perform data management has fundamentally changed.

A Storage System for Every Occasion

One of the reasons data management is back in style is that we’ve seen a mass proliferation in the number of storage systems available to organizations. There are at least four reasons for this proliferation.

First, there is a wide variety of workloads in the data center today. We have applications running on traditional scale-up architectures as well as applications that run on modern scale-out architectures.

Second, we’ve also seen unprecedented growth in unstructured data, driven mostly by machine- and device-generated data. How this unstructured data is used and how responsive the storage system needs to be vary wildly by use case.

Third, we’ve seen a proliferation of companies with multiple data centers. Such companies want to be able to move data sets rapidly between data centers, as well as back and forth between their data centers and the public cloud. Also, each public cloud provider offers multiple classes of storage within its architecture, all at different price and performance levels. Customers also want to move data between these service levels based on their needs at the moment.

Finally, while flash storage has certainly reached cost parity with some hard disk configurations, it still cannot compete on a cost-per-GB basis with systems designed to be cheap-and-deep storage.

The Art of Data Management

There is definitely a need to move data between these various storage offerings, but that movement has to happen at the right time with no interruption to current processes. The sheer volume of data, as well as the quantity of files and objects, means that deciding what data should be moved where, and when, is probably beyond human capabilities.

In our on demand webinar, “The Art of Data Management“, my colleague W. Curtis Preston and I discuss how data management needs to change to deal with both the realities of data growth and the overwhelming number of storage systems available to store data.


George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.

