How Legacy Storage Breaks The Cloud

As business data continues its relentless migration to the cloud, providers of cloud-hosted environments and applications are creating the perfect storm for their legacy storage systems, a storm that will break the ‘cloud promise’. These environments require high scalability, performance and reliability, all at a cost-effective price point, both up front and over time. The inability of legacy storage to meet these demands threatens to eat away at the cloud provider’s profitability and competitiveness.

Storage Requirements For Cloud Providers

Like a traditional data center, the cloud provider needs shared storage, most often delivered over a Storage Area Network (SAN), to support its virtual infrastructure. Typically, the cloud provider is motivated to virtualize more aggressively than the legacy data center and to run much higher virtual-machine-to-host densities. This combination places unique requirements on the storage system that supports the cloud provider.

Scalability

Cloud providers need a storage solution that can scale to meet the growing demands of their businesses. But this scalability must encompass more than sheer size. Ideally, the cloud provider needs a storage system that can start small and then scale in very small increments, without limitation, as the business grows, keeping that growth in lockstep with the on-boarding of additional customers.

Performance

The goal of the cloud provider is to optimize its server/host resources so that maximum return on investment can be achieved. This optimization means that every host in the environment will be capable of generating sustained I/O, as well as unexpected peaks.

The design of the storage system also plays a role. If the storage system has a single, centralized architecture, it must cope with the extremely unpredictable I/O patterns these environments are known for. If it is instead a series of independent storage devices, it must still support virtual machine movement and high availability.

There is also a scalability aspect to storage I/O: most storage systems deliver gradually diminishing performance as additional hosts or virtual servers are connected to them. As with capacity, performance needs to scale as the environment grows.
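To make that diminishing-returns effect concrete, here is a minimal back-of-the-envelope sketch in Python. The controller ceiling and per-host demand are illustrative assumptions, not measurements from any particular array.

# Illustrative model: a scale-up array has a fixed controller ceiling,
# so the IOPS available to each host shrinks as hosts are added.
CONTROLLER_IOPS = 200_000   # assumed ceiling of the shared controllers
PER_HOST_DEMAND = 10_000    # assumed sustained demand per virtualized host

for hosts in (4, 8, 16, 32, 64):
    share = CONTROLLER_IOPS / hosts
    status = "OK" if share >= PER_HOST_DEMAND else "oversubscribed"
    print(f"{hosts:3d} hosts -> {share:8.0f} IOPS per host ({status})")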

Reliability

The cloud provider feels increased pressure to maintain uptime since it is servicing the needs of more than just one application or even one business. The ramifications of an outage impact many organizations and are almost always publicly broadcast. The cloud provider’s storage system must therefore be designed with multiple points of redundancy instead of simply avoiding single points of failure, and it needs to provide rapid recovery from a failed state.

Cost Effective

Finally, the cloud provider’s storage system needs to be affordable. How scalability, performance and reliability are delivered plays a key role in reducing both upfront and ongoing costs. But the storage system also needs to be cost effective from a ‘raw dollars’ standpoint: the provider cannot afford the premium markup common to most name-brand storage solutions.

The Limitations of Legacy Storage

Legacy storage typically comes in two forms: a scale-up or a scale-out architecture. With scale-up storage, most of the upfront investment is spent on plentiful storage processing power so that performance (throughput and IOPS) remains acceptable as capacity and users are added to the environment. Scale-out storage adds storage processing power and I/O capacity each time storage is added to the system; these systems start as a minimally sized cluster and then all parameters grow as capacity is added.

Limitations of Legacy Scale Up Storage

Legacy scale-up storage has an obvious upfront limitation in the way it is purchased. Buying everything, all at once, does not match the business model of the typical cloud service provider, a model which assumes customers will be added incrementally and charged on a monthly basis for services. This means that such an investment in processing power and network I/O would go largely unused until the cloud provider grows its customer base to consume it.
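As a rough illustration of that stranded investment, the following Python sketch models a fully pre-purchased array against a steady monthly on-boarding rate. All of the figures (array capacity, starting tenants, growth per month) are hypothetical and chosen only to show the shape of the problem.

# Hypothetical model of an upfront scale-up purchase being consumed over time.
ARRAY_CAPACITY_TB = 500          # assumed capacity bought on day one
TB_PER_CUSTOMER = 2              # assumed average footprint per customer
CUSTOMERS_START = 25
CUSTOMERS_ADDED_PER_MONTH = 10

for month in range(0, 25, 6):
    customers = CUSTOMERS_START + CUSTOMERS_ADDED_PER_MONTH * month
    used_tb = min(customers * TB_PER_CUSTOMER, ARRAY_CAPACITY_TB)
    idle_pct = 100 * (1 - used_tb / ARRAY_CAPACITY_TB)
    print(f"month {month:2d}: {used_tb:5.0f} TB used, {idle_pct:4.0f}% of the investment idle")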

In the cloud provider’s environment there is often a problem managing users’ expectations, because the performance of a scale-up storage system is not consistent. For example, when the first set of users is placed on a new system, they experience the best performance that system can deliver, often more performance than they expected, or paid for. But that performance only degrades over time. In most cases the users will complain about the degrading performance, even though their expectations were unrealistically set by upfront performance they were not actually entitled to. Legacy scale-up storage systems have no ability to limit performance so that these user expectations can be controlled.
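That missing ‘ability to limit performance’ is essentially per-tenant quality of service. As a minimal illustration of what such a cap could look like, here is a token-bucket sketch in Python; the class, rates and tenant names are hypothetical and not drawn from any particular product.

import time

class IopsLimiter:
    """Token-bucket cap on the IOPS a single tenant may consume (illustrative only)."""

    def __init__(self, iops_limit: int):
        self.rate = iops_limit           # tokens (I/Os) added per second
        self.tokens = float(iops_limit)  # start with a one-second burst allowance
        self.last = time.monotonic()

    def allow_io(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, never exceeding one second of burst.
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                     # tenant is over its purchased limit; queue or delay the I/O

# Each tenant gets only the performance tier it actually pays for.
tenants = {"tenant-a": IopsLimiter(5_000), "tenant-b": IopsLimiter(20_000)}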

In addition to the physical cost of the equipment there is the cost of the supporting infrastructure, which includes the storage area network, switches and host bus adapters (HBAs). This single upfront cost is a deal breaker for many organizations. But even if the dollars are available to make this purchase, a legacy scale-up storage system may not be the best fit for the cloud provider.

For the cloud provider that can absorb the upfront cost of such a system, there is also the eventual forklift upgrade to deal with. This occurs when the storage system reaches a point where no more capacity can be added or, more likely, performance can no longer be sustained given the workload.

When that point is reached, the cloud provider must once again bear the cost of a new upfront hardware investment, and once again excess processing and I/O capacity will go unused until new clients are added to the environment. There is also the added cost of migrating customers and their data to the new system. These issues often mean that the cloud provider will forego an upgrade for as long as possible so that the next system will be more fully utilized when installed. Doing so, though, greatly increases the risk of customer dissatisfaction and of missing performance Service Level Agreements (SLAs).

Limitations of Legacy Scale Out Storage

The potential answer to the problems of a scale-up storage system is a scale-out storage system. These systems do scale performance with capacity expansion: as a “node” is added to the storage cluster, it brings additional capacity, storage CPU power and network I/O.

The first problem with legacy scale-out storage is that, in most cases, these three components (capacity, CPU, I/O) have to be scaled at the same time, and the environment seldom needs all three simultaneously. What typically occurs is that nodes are added to address capacity and/or I/O issues, and the CPU resource on those storage nodes goes largely underutilized. This is ironic, since the goal of the cloud infrastructure is to maximize CPU resources and legacy scale-out storage ends up wasting them.
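A quick way to see the scale of that waste is to model a capacity-driven expansion and track how the bundled CPU piles up. The node specifications and workload figures below are assumptions for illustration only.

import math

# Hypothetical scale-out node: every node bundles capacity, CPU and network I/O.
NODE_CAPACITY_TB = 48
NODE_CPU_CORES = 16

required_capacity_tb = 400    # assumed growth is driven purely by capacity
required_storage_cores = 40   # assumed CPU the storage workload actually needs

nodes = math.ceil(required_capacity_tb / NODE_CAPACITY_TB)
cores_bought = nodes * NODE_CPU_CORES
utilization = 100 * required_storage_cores / cores_bought

print(f"{nodes} nodes purchased for capacity -> {cores_bought} cores bought, "
      f"{utilization:.0f}% of that CPU actually used")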

The second problem is that these systems are not tied into the actual needs of the hosts, so their scaling is independent of the host scaling. This means that administrators have to manage two different scaling paths.

Limitations of Legacy Scale Out and Legacy Scale Up Storage

Both legacy scale-out and scale-up architectures require a storage network of some kind in order for cloud images to be shared. This is an added expense and another point of management.

Finally, there are the somewhat obvious ‘vendor lock-in’ issues that have always existed in these systems. They are more difficult to deal with in the cloud data center because of the massive scale of the storage infrastructure and the continuous pressure to reduce costs. By requiring that the storage, network and server tiers be managed as separate stacks, storage vendors are also making it harder to move to a software-defined data center.

Modern Storage for a Modern Data Center

The answer may be to move to an environment where the server, network and storage stacks are converged into a single software-defined layer that can provide maximum flexibility, performance and reliability while reaching unparalleled cost effectiveness. In short, storage needs to do more than just participate in the cloud: it needs to fully leverage it by being part of the cloud.

Storage Switzerland’s next report “Designing a SAN for the Cloud Service Provider” will explain how this modern SAN topology is designed and what its benefits are over the legacy storage methods of today.

OnApp is a client of Storage Switzerland

Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.
