Achieving Cloud Efficiency Without Cloud Scale

Our last blog, “Are Cloud Providers Really Efficient”, discussed how cloud providers, from a resources perspective, aren't really any more efficient than the typical data center. What separates them from the traditional data center is their use of automation and their scale. Traditional data centers can't afford to scale to the size of a cloud provider, and the multitude of disparate storage systems they run makes automation almost impossible.

The storage systems that traditional data centers use often force compromise. Their storage software acts as a roadblock to the raw performance capabilities of flash media and high-speed storage networks. The software also doesn’t take full advantage of multi-core processing. These systems reach their performance limits long before the raw specifications of the hardware resources indicate that they should.


The lack of full resource optimization leads companies to purchase additional storage systems for specific use cases. Many data centers today use a minimum of six different enterprise storage systems to meet the application demands of the organization.

It may seem that scale-out storage infrastructures solve the problem. In reality, these systems are even worse at efficient storage resource utilization. Most organizations need to scale because either they are out of capacity or out of performance, but rarely do they scale because they are out of both. That means that the scale-out system is adding two resources when there is a need for only one. Additionally, remember that those resources aren’t being used efficiently in the first place.
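
A rough calculation shows how much resource a scale-out purchase can strand. The per-node figures below are assumptions made up for illustration, not any vendor's actual specifications; the point is simply that every new node adds both capacity and performance, whether or not both are needed.

```python
import math

# Assumed per-node specifications, for illustration only.
NODE_CAPACITY_TB = 100            # capacity each new node adds
NODE_PERFORMANCE_IOPS = 200_000   # performance each new node adds

# Suppose the workload has grown in capacity but not in performance.
needed_capacity_tb = 500
needed_iops = 300_000

# Scale-out forces the purchase of whole nodes until capacity is satisfied.
nodes = math.ceil(needed_capacity_tb / NODE_CAPACITY_TB)

capacity_utilization = needed_capacity_tb / (nodes * NODE_CAPACITY_TB)
performance_utilization = needed_iops / (nodes * NODE_PERFORMANCE_IOPS)

print(f"Nodes purchased: {nodes}")                                 # 5
print(f"Capacity utilization: {capacity_utilization:.0%}")          # 100%
print(f"Performance utilization: {performance_utilization:.0%}")    # 30%
```

In this made-up case, satisfying a capacity shortfall leaves 70% of the performance that was paid for sitting idle, before any per-node software inefficiency is even considered.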

In the cloud, the number of customers hides the inefficiencies of a scale-out architecture. Cloud providers can add so many nodes to the storage infrastructure that performance and capacity limits are rarely tested, but per-node resource utilization is still very low.

It is tempting to just push all data assets to the cloud and make storage management somebody else's problem. There are good reasons, however, for IT to maintain control of its data assets. In most cases, organizations will find that within a few years the total cost of storing data in the cloud will surpass the cost of on-premises storage.

The reason is that data's permanence creates a challenge for the cloud pricing model. The cloud can add computing power to an application without requiring much change to the application, but the data the application operates on has to already exist; it can't just suddenly appear. Cloud data storage is a long-term proposition. Renting CPU power from a provider makes sense because the need is temporary. Storing data in the cloud, however, does not make the same sense because the data is permanent.
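
A simple back-of-the-envelope model illustrates why the costs cross over. The prices below are placeholders, not quotes from any provider; the point is that rented capacity is billed every month for as long as the data exists, while purchased capacity is paid for once plus annual support.

```python
# Hypothetical pricing, for illustration only.
CLOUD_PER_TB_MONTH = 20.0        # $/TB/month of rented cloud storage
ONPREM_PER_TB_PURCHASE = 300.0   # $/TB one-time hardware purchase
ONPREM_SUPPORT_RATE = 0.15       # annual support as a fraction of purchase price

capacity_tb = 500

cloud_cumulative = 0.0
onprem_cumulative = capacity_tb * ONPREM_PER_TB_PURCHASE  # paid up front

for year in range(1, 6):
    cloud_cumulative += capacity_tb * CLOUD_PER_TB_MONTH * 12
    onprem_cumulative += capacity_tb * ONPREM_PER_TB_PURCHASE * ONPREM_SUPPORT_RATE
    print(f"Year {year}: cloud ${cloud_cumulative:,.0f} vs on-premises ${onprem_cumulative:,.0f}")
```

With these placeholder figures, the rented capacity costs more than the purchased system by the second year, and because the data never goes away, the gap only widens from there.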

Efficiency is More Important than Scale

Traditional data centers can’t provide better services by becoming a smaller cloud. These organizations can’t reach cloud scale. The lack of scale means that using an architecture designed for the cloud doesn’t make sense. Instead of trying to recreate the cloud in their data centers, organizations need to take a different tack.

The commodity aspect of cloud architectures, using storage software to drive commodity storage hardware, makes sense. Commodity hardware, though, doesn't always mean scale-out. Instead of scaling the storage architecture to hide inefficiency, organizations need a scale-up architecture that is extremely efficient with the resources provided. IT needs to select software that can extract true maximum performance from the storage media, storage compute and storage network.

Traditional systems create a problem because of their inefficiency. For example, once a solid-state drive (SSD) is installed in a storage system, its delivered performance drops 90% or more below its raw specification because of storage software overhead. The lack of resource efficiency forces customers to buy additional SSDs and overpowered CPUs for their storage systems, or to use the most common workaround: dedicating storage systems to specific workloads. This strategy increases inefficiency by forcing IT to manage and support, on average, six to eight storage systems in their data centers.
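
A rough example shows how that overhead compounds into extra hardware purchases. The drive and overhead figures below are assumptions for illustration, not measurements of any specific product.

```python
import math

# Assumed figures, for illustration only.
RAW_SSD_IOPS = 500_000       # what the drive's specification sheet promises
SOFTWARE_EFFICIENCY = 0.10   # roughly 10% survives the storage software stack

delivered_per_ssd = RAW_SSD_IOPS * SOFTWARE_EFFICIENCY

required_iops = 400_000
ssds_needed = math.ceil(required_iops / delivered_per_ssd)

print(f"Delivered IOPS per SSD: {delivered_per_ssd:,.0f}")       # 50,000
print(f"SSDs needed for {required_iops:,} IOPS: {ssds_needed}")   # 8
# On its raw specification, a single drive could have covered this workload.
```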

A Storage Operating Environment

To solve the problem and make data centers more competitive with the cloud, the storage software needs to change. Storage software needs to follow a model similar to VMware's. With VMware, IT no longer worries about what type of servers the hypervisor cluster contains. The hypervisor simply leverages the hardware as best it can.

Instead of designing on-premises storage software for a specific use case or hardware type, vendors need to develop a storage operating environment that works on various types of hardware while still extracting maximum performance from it. It should also support multiple protocols like block, file and object (REST). Arming IT with this type of software enables them to use a single storage software solution for the entire data center, even when using multiple hardware devices or when applications require specific protocols.
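
As a thought experiment, the shape of such an environment might look like the sketch below. Every name in it is hypothetical and corresponds to no real product's API; it only illustrates one software layer presenting block, file and object access over whatever mix of hardware is installed.

```python
# Hypothetical sketch of a storage operating environment, for illustration only.
from abc import ABC, abstractmethod


class StorageDevice(ABC):
    """Any commodity device the environment can drive: NVMe, SAS SSD, or HDD."""

    @abstractmethod
    def read(self, offset: int, length: int) -> bytes: ...

    @abstractmethod
    def write(self, offset: int, data: bytes) -> None: ...


class StorageOperatingEnvironment:
    """One software layer exposing block, file and object protocols
    over a heterogeneous pool of devices."""

    def __init__(self, devices: list[StorageDevice]):
        self.devices = devices  # mixed hardware, managed uniformly

    # Block access (e.g., presented as an iSCSI or NVMe-oF volume).
    def read_block(self, volume: str, lba: int) -> bytes: ...

    # File access (e.g., presented over NFS or SMB).
    def read_file(self, share: str, path: str) -> bytes: ...

    # Object access (e.g., presented over a REST/S3-style interface).
    def get_object(self, bucket: str, key: str) -> bytes: ...
```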

Full Resource Utilization

Since traditional data centers can't afford to scale like cloud providers can, storage software vendors need to focus on storage software efficiency. These vendors need to rewrite storage software so that it supports true multi-threading, enabling the software to take full advantage of modern multi-core processors.
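
A minimal sketch of the idea, under the assumption that the I/O path can be broken into independent requests: a pool of workers sized to the machine's core count services requests in parallel rather than serializing them behind a single thread. Real storage software does this far closer to the hardware than Python ever could; this only illustrates the principle.

```python
import os
from concurrent.futures import ThreadPoolExecutor


def handle_request(request_id: int) -> str:
    # Placeholder for the real work: checksums, RAID math, cache lookups, media I/O.
    return f"request {request_id} completed"


def serve(requests: list[int]) -> list[str]:
    # Size the worker pool to the core count so no core sits idle
    # while another is saturated.
    with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
        return list(pool.map(handle_request, requests))


if __name__ == "__main__":
    print(serve(list(range(8))))
```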

The storage software also needs to fully exploit flash media, so there isn't a 90% drop in performance. Even well-established algorithms like RAID, data protection and snapshots need revisiting. A system that extracts the full raw performance of a flash array, or even an array full of hard drives, enables IT to address multiple workloads per system, making the on-premises architecture more efficient by reducing the number of physical systems required.

Putting it Together

To offer a better service than the cloud, IT needs a single storage operating environment, a software application that manages all of the storage system types in the environment. The software, in addition to providing data services like volume management, RAID and snapshots, needs to accelerate the performance of commodity storage servers by fully utilizing the resources at its disposal.

In our next blog, we'll discuss an area where IT can provide better capabilities than the cloud: data protection. The storage operating environment needs the ability to protect itself. In the fourth blog in this series, we'll discuss how the single operating environment enables automation so the organization can provide self-service capabilities similar to the cloud.


George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.
