The Data Management Imperative vs. IT Reality

Data management is hard, and the massive growth of data is making the task even more difficult. But data growth also makes data management an imperative. Organizations need a way to manage their data so that it is always available yet stored cost effectively. If they can't, that data, the very thing they are counting on to drive their business forward, will bury them.

What is Data Management?

Data management is the process of moving data from one type of storage to another based on a policy. One common policy is moving all data that has not been accessed in a period of time from production storage to less expensive secondary storage. However, organizations may also want to move data based on type, source or future use case. As an example, video surveillance data may never need to be stored on fast production storage; instead, the system can immediately route it to secondary storage and subsequently archive it.
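As a rough sketch, an age-based policy like the one described above might look like the following in Python. The directory names, one-year threshold, and function name are illustrative assumptions, not any vendor's API; a real system would also handle errors, verify the copy before deleting, and leave a stub or link behind so users can still find their data.

```python
import os
import shutil
import time

# Hypothetical policy: demote anything untouched for a year.
AGE_THRESHOLD = 365 * 24 * 3600  # one year, in seconds

def tier_by_age(src_dir, dest_dir, now=None):
    """Move files not accessed within AGE_THRESHOLD from production
    storage (src_dir) to secondary storage (dest_dir)."""
    now = now or time.time()
    moved = []
    for name in os.listdir(src_dir):
        path = os.path.join(src_dir, name)
        if not os.path.isfile(path):
            continue
        last_access = os.stat(path).st_atime  # last access time
        if now - last_access > AGE_THRESHOLD:
            shutil.move(path, os.path.join(dest_dir, name))
            moved.append(name)
    return moved
```

Note that this simple sketch illustrates exactly the fragility the article goes on to describe: the identify, copy, and remove steps all happen across storage boundaries, and any failure mid-move strands the data.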

For most organizations, the objective of a data management process is to drive down the overall cost of storage. An effective data management strategy, however, can also simplify the data protection process and position the organization to retain information more accurately, making it easier to meet increasingly strict data governance regulations.

The problem is that data management solutions are cumbersome, not only to implement but also to maintain over time. Difficulty creeps in as data moves between the various types of storage: the system must identify the data, copy it across a network and then remove it from its original location. Plenty can go wrong in that process, and if something does, administrators will certainly feel the wrath of users who can't find their data. IT needs a new way to store data so that managing it comes more naturally.

The All-Flash Data Center – Eliminating Data Management?

One solution is to give up and stop managing data altogether by creating an all-flash data center. This “solution” solves the challenges of data management by putting all data on the fastest tier possible. All-flash vendors are trying to make the case that flash, thanks to price drops and data efficiency technology (deduplication and compression), is cheap enough to be the only tier needed for all data types.

The reality is that high capacity hard disk drives and cloud storage are still far less expensive than flash storage. It is fair to assume that someday flash will match the price of hard disk storage, but by then there will be higher performing alternatives to flash, which organizations will want for their production applications, and flash will become the capacity tier. Eventually even the all-flash data center will need data management.

Flash to Flash to Cloud – The Right Data Management Strategy?

An alternative is to create a flash to flash to cloud architecture. This architecture uses an all-flash array to hold not only active data but also near-active data (data accessed within the last year). The system uses two tiers of flash, internally managing data between a high performance NVMe flash tier and a high density (and less expensive) all-flash tier. Then, when data has gone more than a year without being accessed, the system archives it to a cloud tier: an on-premises object storage system, a public cloud storage provider, or a combination of both.
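The tier-selection logic described above can be sketched as a simple decision function. The one-year archive threshold comes from the article; the 30-day near-active window and the tier names are illustrative assumptions, since real systems tune these per workload:

```python
from datetime import datetime, timedelta

# Illustrative thresholds only; real systems tune these per workload.
NEAR_ACTIVE_WINDOW = timedelta(days=30)  # assumed hot-data window
ARCHIVE_WINDOW = timedelta(days=365)     # per the article: a year untouched

def choose_tier(last_accessed, now=None):
    """Return the storage tier for data based on its last access time."""
    now = now or datetime.now()
    age = now - last_accessed
    if age <= NEAR_ACTIVE_WINDOW:
        return "nvme-flash"          # high performance tier for active data
    if age <= ARCHIVE_WINDOW:
        return "high-density-flash"  # less expensive flash for near-active data
    return "cloud-archive"           # object store, on-premises or public cloud
```

The key point is that the first transition (NVMe flash to high-density flash) happens inside the array, so no external copy-and-delete process is needed until data crosses the one-year line.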

The flash to flash approach minimizes the amount of external data movement required for data management and automates internal data movement (flash to flash). The result is a simplified data management process that IT can easily implement and more importantly maintain.

In our blog, “Why Data Movement Breaks Data Management – and How to Fix It”, we detail how the flash to flash to cloud model fixes the component of data management that causes the process to break and forces organizations to abandon the project: data movement.

Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of technologies such as RAID, NAS and SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.
