Reducing Storage TCO with Private Cloud Storage

With the burgeoning growth of data, many legacy storage systems struggle to keep the total cost of ownership (TCO) in check. This article looks at the ways that private cloud storage can address the TCO shortcomings of legacy storage.

A Less Expensive Baseline

One of the simplest ways to reduce the TCO of a storage system is to make its entry point less expensive. Legacy storage systems are designed with custom hardware, custom silicon and very sophisticated, proprietary software. Even the drives that go into these systems are sometimes customized specifically for the vendor. This can result in a much higher starting price.

Years ago, when the standard Intel platform could not deliver the I/O performance required by attached storage systems, customized hardware made sense. Now, however, the common midrange Intel platform offers more than enough performance for many storage I/O workloads.

Private cloud storage systems take advantage of this fact by leveraging off-the-shelf Intel hardware, installing off-the-shelf hard drives, and then loading the cloud storage software onto that stack. These servers are linked or clustered together to form a grid of ‘nodes’ that deliver a scalable, global pool of storage for the data center.

Start with Less

In addition to leveraging off-the-shelf hardware, private cloud storage systems also allow for a much more rapid and seamless expansion of disk capacity. Legacy storage systems, because of their more complex expansion processes, are often bought with a significant amount of extra capacity. This is especially problematic with hard drives, since the price per gigabyte continues to fall over time, making the purchase of unneeded capacity upfront a significant waste of capital.

Private cloud storage allows for capacity to be treated as a just-in-time inventory item. Most businesses utilizing private cloud storage tend to buy in six-month increments rather than three-year cycles. This means that not only do these customers avoid purchasing capacity until it’s needed, they also enjoy an approximately 15% reduction in the cost of that capacity when the time comes to buy it.
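The math behind just-in-time purchasing is straightforward. The sketch below compares buying three years of capacity upfront against buying it in six-month increments, using the article's ~15% per-cycle price decline; the starting price and total capacity are illustrative assumptions, not vendor figures.

```python
# Sketch: capital cost of buying 3 years of capacity on day one vs. in
# six six-month just-in-time increments. The 15% per-purchase price
# decline is the article's figure; $/TB and capacity are assumptions.

def upfront_cost(total_tb, price_per_tb):
    """Buy all capacity on day one at today's price."""
    return total_tb * price_per_tb

def jit_cost(total_tb, price_per_tb, periods=6, decline=0.15):
    """Buy total_tb / periods every six months; each later buy is ~15% cheaper."""
    per_period = total_tb / periods
    return sum(per_period * price_per_tb * (1 - decline) ** n
               for n in range(periods))

tb_needed = 600     # assumed 3-year capacity requirement
price = 100.0       # assumed starting price, $ per TB

print(f"upfront:      ${upfront_cost(tb_needed, price):,.0f}")
print(f"just-in-time: ${jit_cost(tb_needed, price):,.0f}")
```

Under these assumptions the incremental buyer spends roughly a third less capital for the same capacity, before even counting the cost of powering unused drives.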

Low Cost of Entry

One attribute that attracts businesses to cloud storage is its very low cost of entry. Unlike traditional storage, which typically requires a large upfront investment, cloud storage capacity can be provisioned in small increments tied to the business's actual storage requirements. The following table illustrates the very low cost per GB of cloud storage. While some of this information, like cost of operations, is subjective, it still demonstrates that when organizations deploy private cloud storage utilizing a software-defined cloud storage solution, like those from Cloudian, they can achieve a very low cost per GB.

[Table: Cloudian cost-per-GB comparison]

Run at Full Capacity

Most legacy storage systems are purchased with the intention of leaving a significant amount of capacity unused for the life of the system. It is never the intent that the system be 100% full – or even 80% full. The first reason for this is that most legacy storage systems have a serial data architecture. This type of ‘monolithic’ design funnels all I/O through a single set of controllers to a given set of disks. If every drive were filled to its maximum, the storage controller could not sustain the performance demands created by the disks it supports.

Private cloud storage, on the other hand, has multiple controllers (server nodes) that communicate with the disks inside each node and provide parallel access to all the disks across all the nodes at the same time. The sustained IOPS of the environment can therefore be significantly higher than that of legacy storage with its serial architecture, and the private cloud storage system can be run at full capacity without fear of a performance bottleneck. As more capacity is added to the system, additional processing power, network bandwidth and storage I/O bandwidth come along with it.
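The serial-versus-parallel difference can be captured in a tiny model: a monolithic array's throughput is capped at its controller pair no matter how many shelves are added, while a node-based grid gains I/O resources with every capacity increment. All of the numbers below are illustrative assumptions, not benchmarks.

```python
# Sketch: aggregate IOPS as capacity grows. In the monolithic model every
# drive hangs off one controller pair, so throughput is capped; in the
# node-based model each capacity increment arrives as a node with its own
# CPU, network, and disk I/O. Figures are illustrative assumptions.

CONTROLLER_CAP_IOPS = 50_000   # assumed ceiling of a dual-controller array
IOPS_PER_NODE = 10_000         # assumed per-node contribution
TB_PER_NODE = 48               # assumed usable capacity per node

def monolithic_iops(capacity_tb):
    # Adding shelves adds capacity but not controller throughput.
    return CONTROLLER_CAP_IOPS

def grid_iops(capacity_tb):
    # Each increment of capacity brings a node with its own resources.
    nodes = -(-capacity_tb // TB_PER_NODE)   # ceiling division
    return nodes * IOPS_PER_NODE

for tb in (48, 240, 480, 960):
    print(f"{tb:>4} TB  monolithic={monolithic_iops(tb):>7}  grid={grid_iops(tb):>7}")
```

In this model the grid overtakes the monolithic array once enough nodes are deployed, and there is no utilization level at which its per-drive performance collapses.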

Another reason that legacy storage systems are not run at full capacity is that the cost and time involved in migrating data from one system to another can be significant. The motivation, then, is to move to a new platform as soon as it becomes available, or as soon as the old storage system is fully amortized, before the data on that system grows so large that the migration would take too long.

Flexible Refresh Cycles

A private cloud storage system on the other hand never requires a full data migration event. Instead, nodes are continually added to the system as capacity demands increase. And when newer, higher performing nodes become available, the old nodes can be gradually decommissioned since data is dispersed or replicated automatically to other nodes, providing the redundancy required to support live, non-disruptive upgrades. The ability to bring in new nodes and slowly expire old ones also eliminates the complexity of calculating the fully burdened cost of storage. With private cloud storage, there is no big migration event that consumes time and administrative resources, so the cost of storage is essentially the total cost of the nodes that are purchased and any applicable maintenance contract.

In addition, private cloud storage that is “software defined” enables businesses to take advantage of a faster hardware refresh cycle. As new servers and disk storage come to market, new nodes can be implemented to leverage these resources, delivering improved performance and enhanced storage efficiencies. With traditional storage technology, on the other hand, you are tied to the storage vendor's release cycle, which is typically three years.

Elimination of Data Protection Costs

Private cloud storage systems, like those from Cloudian, leverage a redundant array of independent nodes (RAIN) instead of the more common redundant array of independent disks (RAID). This provides data reliability not only within each node but also across those nodes. The result is a data protection scheme that has the same overhead as RAID-5 but can be 10X more reliable. This means RAIN can sustain more failures more often with less of a performance impact than the equivalent RAID technology.
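A back-of-the-envelope binomial model shows why tolerating even one additional simultaneous failure cuts the chance of data loss so sharply. The per-device failure probability, group sizes, and fragment counts below are illustrative assumptions, not Cloudian's actual erasure-coding parameters.

```python
# Sketch: probability of losing data in one rebuild window. RAID-5 loses
# data on a second concurrent failure; a cross-node scheme tolerating two
# failures needs a third. Per-device failure probability is an assumption.
from math import comb

def p_data_loss(devices, tolerated, p_fail):
    """P(more than `tolerated` of `devices` fail in the same window)."""
    return sum(comb(devices, k) * p_fail**k * (1 - p_fail)**(devices - k)
               for k in range(tolerated + 1, devices + 1))

p = 0.01  # assumed per-device failure probability during a rebuild window

# RAID-5: 8 data + 1 parity drives; any second failure loses data.
raid5 = p_data_loss(devices=9, tolerated=1, p_fail=p)

# Cross-node erasure coding: 9 fragments, any 2 may be lost (assumed layout).
rain = p_data_loss(devices=9, tolerated=2, p_fail=p)

print(f"RAID-5 loss probability: {raid5:.2e}")
print(f"RAIN   loss probability: {rain:.2e}")
print(f"RAIN is ~{raid5 / rain:.0f}x less likely to lose data")
```

Even this crude model yields well over an order-of-magnitude improvement, consistent in spirit with the 10X claim above, though real durability depends on rebuild times and correlated failures.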

The result is that many companies have decided that two object storage systems replicating data between each other can eliminate the need for a separate backup disk or tape storage system altogether. This makes a dramatic impact on TCO by eliminating an entire tier of storage and an entire process (backup).

Reliability like this also improves staff productivity. The sudden need to drop everything and fix a failed hard drive because data is at risk or performance is suffering is a key cause of escalating operational costs. An important aspect of this RAIN level of reliability is that the failure of a node or a disk does not necessarily require immediate IT intervention, since the data remains fully protected. This provides another reduction in operational costs, as maintenance on the private cloud storage system can be a scheduled and predictable process rather than a reactive, unplanned event that could impact application service levels.

Reducing Soft Costs

Soft costs are those recurring costs outside of the actual acquisition of the storage system, such as the cost to power, cool and operate it. Since private cloud storage systems are node-based and data can be moved between groups of nodes locally and to off-site data centers, the type of node used, as well as the drives used in that node, can vary with the task at hand. For older data sets, groups of nodes can be implemented with high-capacity, spin-down hard drives that consume less power.
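The power savings from spin-down drives are easy to estimate. The sketch below compares the annual electricity cost of a cold-data node group with its drives always spinning versus spun down; the wattages, drive count, and electricity price are illustrative assumptions that vary by drive model and facility.

```python
# Sketch: annual electricity cost of a cold-data node group, drives
# spinning vs. spun down. Wattage and $/kWh are illustrative assumptions.

def annual_power_cost(drives, watts_per_drive, price_per_kwh=0.12):
    """Cost to run `drives` drives continuously for one year."""
    kwh_per_year = drives * watts_per_drive * 24 * 365 / 1000
    return kwh_per_year * price_per_kwh

spinning = annual_power_cost(drives=200, watts_per_drive=8.0)  # assumed ~8 W active
standby = annual_power_cost(drives=200, watts_per_drive=1.0)   # assumed ~1 W spun down

print(f"always spinning: ${spinning:,.0f}/year")
print(f"spun down:       ${standby:,.0f}/year")
```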

IT staffing is another key soft cost. Operational costs are typically hard to quantify but private cloud storage systems have repeatedly been shown to allow a single storage administrator to manage petabytes of storage, something unheard of with legacy storage systems. The reliability of the system, as mentioned above, as well as the multiple nodes acting as one, are key reasons why a single storage administrator can manage this much capacity.

The Organic ROI

Private cloud storage, thanks to its organic nature, has the ability to adapt to the changing needs of the environment. This may be its single biggest TCO factor. The one thing that the data center can never predict is what business demands will be 3, 5 or 10 years into the future. A storage system selected today may be totally inappropriate for the demands of tomorrow.

Private cloud storage, however, is unique in that it can adapt at a granular level to the demands of the environment. Provisioning a few different types of nodes with unique configurations is all it takes to address today's business demands. As requirements change, these nodes become part of the overall storage infrastructure, and it is this flexibility and adaptability that helps keep operational costs and TCO in check.

Summary

One of the biggest challenges facing the data center is how to deal with the growth of unstructured data. By some accounts, unstructured data now accounts for 80-90% of all net-new data growth in the data center. Unstructured data can come from a variety of sources, including user files, data from an analytics initiative (big data), rich media or more traditional archive data. Unstructured data stores require storage systems that can scale at a very granular level to minimize upfront costs, but at the same time scale massively to address the needs of the modern enterprise.

Legacy storage, when used for these purposes, has difficulty delivering an acceptable total cost of ownership. Private cloud storage may be the ideal alternative, keeping TCO in check while addressing the needs of the business.

Sponsored by Cloudian


As a 22-year IT veteran, Colm has worked in a variety of capacities, ranging from technical support of critical OLTP environments to consultative sales and marketing for system integrators and manufacturers. His focus in the enterprise storage, backup and disaster recovery solutions space extends from mainframe and distributed computing environments across a wide range of industries.

Comments on “Reducing Storage TCO with Private Cloud Storage”
  1. Tim says:

    Well, this was a good article on the TCO of private cloud storage. Part of the TCO is how the object storage vendor charges for the use and support of their software. Historically, if only the software was provided by the object storage vendor, then the customer purchased their storage server hardware separately and paid a “subscription” fee to the storage software vendor based on how much of the storage capacity in the cluster was actually being used. The cost per GB could be tiered based on crossing certain thresholds of storage utilization in the cluster. Vendor support could be included in the “subscription” charge or broken out separately. The customer would have the storage hardware vendor to go to for warranty and extended warranty service on the storage hardware.

    In the case of storage appliances provided by a storage software vendor, the customer pays the cost of each storage appliance and usually an annual “maintenance” fee based on a percentage of the MSRP for each storage appliance. Some level of software vendor support could be included in the annual “maintenance” fee, which also included software updates. There would be a standard warranty and/or extended warranty available on the storage appliance. With this model the storage software vendor is not charging for the amount of storage capacity in the cluster that is actually being used. This would be similar to a “perpetual” license to use the software provided that you paid the annual “maintenance” fee for each appliance.

    Which is better? There are more “moving parts” involved in building your own storage servers and installing the software yourself. You might do this if your storage server requirements aren’t a good match with the appliance models available. If you don’t want to spend the time to do all that work, then the storage software vendor’s appliances will save you time and effort. You will also have “one throat to choke” if the storage software and hardware are supplied by the same vendor, who presumably would have expert knowledge of both.


