Most data center administrators design storage infrastructures for peak load, so that when applications or users need the most performance or capacity, the system can meet the demand. The problem is that these peak demands occur only occasionally and usually pass quickly. Overprovisioning storage resources to meet them means that most data centers run storage systems that are essentially idle and carry massive amounts of excess capacity.
Organizations could dramatically reduce the cost of IT if they designed their infrastructures for the normal state instead of the peak state, but there are obvious risks in designing for the norm. If a peak state occurs and IT is unprepared, or unable to respond quickly enough, application performance can suffer and users may complain, and probably will. What the data center needs is the ability to scale in real time, automatically and without intervention.
The Art of Scalability
Scalability comes in two forms. First, there is the systematic growth of the data center that occurs as applications or users are added to the environment. This growth may require additional compute as well as additional storage resources. The amount of time that IT has to plan for this growth varies from organization to organization, but even in the most meticulously planned data center, growth is a challenge: organizations need to purchase new systems and move data to them.
The second form of scale is the ability to respond rapidly to unexpected events, typically performance related. The solution to these events is to move a workload’s dataset, or a subset of it, to a higher-performing storage platform. The problem is that in many cases the data movement between platforms is disruptive, causing either an application outage or temporarily poor response time.
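The idea of moving only a subset of a workload’s dataset, rather than the entire volume, can be sketched as follows. This is a minimal illustration, not any product’s actual API; the `Extent` type, the `migrate_hot_extents` function, and the tier names are all hypothetical.

```python
# Hypothetical sketch: promote only the "hot" subset of a workload's
# dataset to a faster tier, instead of moving the whole volume.
from dataclasses import dataclass

@dataclass
class Extent:
    id: str
    tier: str            # illustrative tiers: "capacity" or "performance"
    iops_last_hour: int  # recent access intensity

def migrate_hot_extents(extents, iops_threshold=1000):
    """Promote only the extents whose recent I/O exceeds the threshold."""
    moved = []
    for ext in extents:
        if ext.tier == "capacity" and ext.iops_last_hour > iops_threshold:
            ext.tier = "performance"   # simulate the granular data movement
            moved.append(ext.id)
    return moved

dataset = [Extent("e1", "capacity", 50), Extent("e2", "capacity", 4200)]
promoted = migrate_hot_extents(dataset)  # only "e2" crosses the threshold
```

Because only the busy extents move, far less data is in flight during a peak, which is what makes a non-disruptive response plausible.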
Real-time Scale Requirements
Several virtualization technologies claim to provide flexible scaling, but few can claim real-time scale. They must allow for the reality that access or performance may be interrupted while data is shifted around. Data virtualization changes the game and makes real-time scale a reality.
Real-time scale requires a nimble solution that can operate on data at a very granular level; most virtualization solutions can only operate on entire volumes. Data virtualization can also support a variety of storage types, including block, file, object and cloud. Most other forms of storage virtualization support only one storage system type, and few support the cloud as an extension.
The combination of granular data movement and support for a wide variety of devices means that data virtualization can respond rapidly to both expected and unexpected peaks in performance and capacity demand, with no interruption of service.
Throughout this series we have built the case for data virtualization, which allows a finer-grained data management model than has previously been available. Combined with policy-driven control, it enables an organization to build a storage architecture that automatically responds to the demands that applications and users place on it. The net effect is a storage architecture that costs less from both an acquisition and an ongoing management perspective.
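The policy-driven control described above can be pictured as a simple rule loop: each policy pairs a condition on observed metrics with an action to take when it fires. The sketch below is purely illustrative; the policy names, metric keys, and thresholds are assumptions, not taken from any real product.

```python
# Hypothetical sketch of policy-driven control: policies pair a condition
# over observed storage metrics with an automated action.

POLICIES = [
    # (policy name, condition over metrics, action description)
    ("promote-hot-data",
     lambda m: m["read_latency_ms"] > 10,
     "move hot extents to the performance tier"),
    ("expand-to-cloud",
     lambda m: m["capacity_used_pct"] > 85,
     "extend the volume onto cloud capacity"),
]

def evaluate_policies(metrics):
    """Return the actions triggered by the current metrics."""
    return [action for name, cond, action in POLICIES if cond(metrics)]

# A latency spike triggers data movement with no administrator involved.
actions = evaluate_policies({"read_latency_ms": 14, "capacity_used_pct": 60})
```

Run periodically against live metrics, a loop like this is what turns granular data movement into the automatic, real-time response the architecture promises.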