Here is the goal: a data center that consolidates all data onto a single storage system from one vendor. The hope is that consolidating to a single system will reduce the management headaches IT administrators face when managing storage for their organizations. And that technique works…until it doesn’t.
At some point, an application or data set arises in the data center that needs a capability or a price point that the single storage system can’t deliver. So the organization buys a “special” storage system for that one use case. Then another, and another, and within a few years the storage infrastructure is fragmented again. But is storage fragmentation bad?
Fragmentation is Good?
Actually, there are sound justifications for storage fragmentation. Consolidating storage systems is very difficult given the wide range of storage media, storage types and application use cases. For example, there’s flash storage. All flash is fast, but there are tremendous differences in the performance and endurance of the various flash offerings on the market. There are also latency differences depending on where IT inserts flash in the storage infrastructure, such as on the memory bus, in a PCIe slot, in a SAS bay, or in a shared storage system connected by Fibre Channel or Ethernet.
Flash is, of course, just the performance side of the equation. When there is a need for cost-effective, high-capacity, long-term storage, IT professionals have to choose from another wave of solutions ranging from NAS to object storage to cloud storage. NAS may offer better performance, while object storage may support higher file counts at lower cost. Cloud storage may eliminate the need to expand the data center. Again, each solves a specific storage and business requirement for a given data set.
Storage fragmentation is actually good in certain respects because it allows IT to provide applications and users with the exact storage quality of service they demand.
The Problems with Fragmentation
But here’s a problem with fragmentation: The value of data changes over time. At first, data may need to be on high performance storage. But over time, high capacity storage may be a more appropriate place to store the data. Then later, it needs to move back to high performance storage. Since needs change over time, IT can’t simply decide to place data on one storage system and forget about it forever. Someone, or something, needs to continually monitor data and move it based on its current value within the organization and its storage requirements.
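The monitor-and-move loop described above amounts to a tiering policy. Here is a minimal sketch of one in Python; the tier names, time thresholds, and access-recency rule are all hypothetical, chosen only to illustrate the idea that data migrates between tiers as its value changes.

```python
from datetime import datetime, timedelta

# Hypothetical thresholds -- illustrative, not from any specific product.
HOT_WINDOW = timedelta(days=30)    # accessed recently -> high performance
COLD_WINDOW = timedelta(days=365)  # untouched for a year -> archive

def choose_tier(last_access: datetime, now: datetime) -> str:
    """Pick a storage tier based on how recently the data was accessed."""
    age = now - last_access
    if age <= HOT_WINDOW:
        return "high-performance"  # e.g., all-flash
    if age <= COLD_WINDOW:
        return "high-capacity"     # e.g., NAS or object storage
    return "archive"               # e.g., cloud or tape

now = datetime(2024, 1, 1)
print(choose_tier(datetime(2023, 12, 20), now))  # high-performance
print(choose_tier(datetime(2023, 6, 1), now))    # high-capacity
print(choose_tier(datetime(2021, 1, 1), now))    # archive
```

Note that because the rule keys on the most recent access, data that goes cold and is later touched again is promoted back to high-performance storage, matching the round trip described above.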
The reality is that most data centers are now coming to grips with the fact that fragmentation, at least for storage hardware, is a fact of life.
Data Mobility is the Key
Data mobility is the key to deploying a fragmented storage infrastructure without letting the management of the data within that infrastructure overwhelm IT. As expected, there are several solutions available to organizations needing to solve the mobility challenge.
Storage consolidation, down to a single type of storage, is once again a possible remedy. But most vendors’ remedies call for a single all-flash storage system. However, the all-flash data center solves the problem inefficiently. It’s like cutting a watermelon with a sledgehammer. There are more intelligent software-defined approaches, such as storage virtualization and data virtualization.
In our next entry we will compare these three approaches – the all-flash data center, storage virtualization and data virtualization – so you can determine which approach is best for you.