Since the introduction of flash, two primary methods for addressing application performance have emerged. The first is the hybrid array, which caches active data using algorithms based on a dataset’s frequency of access. The challenge with any algorithm that predicts future access is the chance of degraded performance from a cache miss. The other option is an all-flash array, which eliminates performance unpredictability at the expense of the IT budget. Storage Quality of Service (QoS) solutions attempt to mitigate hybrid unpredictability while maintaining the hybrid’s affordability versus the all-flash alternative.
If the data center can identify an all-flash array that meets its capacity demands within its budget constraints, then high performance is the way to go. Assuming there is enough budget, putting all of the data center’s workloads on a single all-flash system leaves little need for QoS. The exception may be using QoS as a rate limiter, so that applications or users that don’t need high performance are never given it.
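Rate limiting of this kind is often implemented with a token-bucket scheme. The sketch below is a minimal, hypothetical illustration (the class name, rates, and workload are all assumptions, not any vendor's actual implementation): a low-priority workload is capped at a fixed IOPS rate, and any I/O beyond that budget is throttled.

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter: caps a workload at `rate` I/Os per second."""
    def __init__(self, rate, burst):
        self.rate = rate              # tokens (I/Os) refilled per second
        self.capacity = burst         # maximum burst size
        self.tokens = burst           # start with a full bucket
        self.last = time.monotonic()

    def allow(self, n=1):
        """Return True if n I/Os may proceed now, False if they must wait."""
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False                  # caller queues or delays the I/O

# Example: a low-priority backup stream capped at 500 IOPS.
limiter = TokenBucket(rate=500, burst=50)
if limiter.allow():
    pass  # submit the I/O to the array
```

An array-side QoS engine would apply one such limiter per application, virtual machine, or LUN, so that bursty low-priority traffic cannot starve the workloads that do need performance.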
The reality is that most data centers can’t afford an all-flash array for all of their workloads. In fact, many all-flash arrays are purchased to solve specific application performance pain points. Most data centers need to balance performance and cost, and budget realities lead them to buy a hybrid system. While flash enables larger cache areas than in the past, there is always that nagging concern over a cache miss impacting application performance.
To address this challenge, a few hybrid storage vendors provide a QoS feature in their array. QoS allows administrators to set specific performance expectations for certain applications, almost guaranteeing a particular level of performance for that application, virtual machine or LUN. It is important to understand the degree of granularity at which a storage system can set QoS parameters.
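One way to picture what granularity means in practice is a policy lookup that prefers the most specific scope available. The sketch below is purely illustrative (the policy names, scopes, and IOPS values are assumptions, not any vendor's schema): a system that can only set policies at the array level returns the same ceiling for every workload, while a finer-grained system can resolve a per-VM or per-LUN policy first.

```python
# Hypothetical QoS policies at three levels of granularity.
policies = {
    ("vm", "sql-prod-01"): {"min_iops": 5000, "max_iops": 20000},
    ("lun", "lun-42"):     {"min_iops": 1000, "max_iops": 8000},
    ("array", "default"):  {"min_iops": 0,    "max_iops": 4000},
}

def effective_policy(vm=None, lun=None):
    """Return the most granular QoS policy that applies to an I/O stream."""
    for scope, name in (("vm", vm), ("lun", lun), ("array", "default")):
        if name and (scope, name) in policies:
            return policies[(scope, name)]
    return None

# A critical database VM resolves to its own floor and ceiling;
# anything without a specific policy falls back to the array default.
effective_policy(vm="sql-prod-01")  # per-VM policy
effective_policy()                  # array-wide default
```

The finer the granularity a system supports, the closer QoS settings can track actual application requirements rather than a one-size-fits-all array default.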
Centrally provisioned performance requires that either the all-flash or the hybrid storage system described above be the only primary storage system in the data center. For many data centers it is hard, if not impossible, to migrate all workloads to a single system, and a system that could host them all would require unique capabilities. The reality is that most environments will have multiple systems. Software Defined Storage (SDS) has the potential to let data centers manage this mixed-vendor environment successfully. The challenge is that the current generation of SDS is missing one key feature: the ability to enforce QoS across storage systems and operating environments.
As we discussed in our recent webinar, “Three Reasons SDS Needs to go Back to School,” the next generation of SDS needs to automatically inventory its current storage assets to determine the performance capabilities of each storage tier. It should allow the administrator to create service levels based on the performance capabilities of each tier. SDS 2.0 software could then move data between tiers, either within a storage system or across tiers on other storage systems.
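The inventory-then-place logic described above can be sketched very simply. The tier names, latency figures, and costs below are illustrative assumptions (not real product data): the software discovers what each tier can deliver, the administrator defines latency targets per service level, and placement picks the cheapest tier that still meets the target.

```python
# Hypothetical inventory of discovered tiers, as an SDS layer might build it.
tiers = [
    {"name": "all-flash",    "latency_ms": 0.5,  "cost_per_gb": 2.00},
    {"name": "hybrid",       "latency_ms": 3.0,  "cost_per_gb": 0.60},
    {"name": "object-store", "latency_ms": 50.0, "cost_per_gb": 0.02},
]

# Admin-defined service levels: maximum acceptable latency per class.
service_levels = {
    "gold":   1.0,
    "silver": 5.0,
    "bronze": 100.0,
}

def place(service_level):
    """Pick the cheapest tier that meets the service level's latency target."""
    target = service_levels[service_level]
    candidates = [t for t in tiers if t["latency_ms"] <= target]
    return min(candidates, key=lambda t: t["cost_per_gb"])["name"]

place("gold")    # only the flash tier qualifies
place("bronze")  # everything qualifies, so the cheapest tier wins
```

Because placement is driven by measured tier capabilities rather than by which vendor built the array, the same logic works across systems from different vendors, which is exactly the cross-system gap the current generation of SDS leaves open.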
For example, in a recent article, “Flash + Object – The Emergence of a Two Tier Enterprise,” Storage Switzerland discussed how an object storage system is a perfect complement to an all-flash array. The combination allows the all-flash array to be used for active and near-active data, while the object storage system cost-efficiently and safely stores inactive data for decades. The missing ingredient in this solution is how to identify and move data between these two types of storage systems, which will likely come from two different vendors. SDS 2.0 could be that missing ingredient.
To learn more about the next generation of SDS, please register for our webinar, which is now available on-demand. By registering, you will also be able to download our white paper “What is Software Defined Storage 2.0?”