The Time Cost of a Storage Refresh

For most data centers, periodic storage refreshes are a way of life. Often, the storage vendor forces the refresh by pricing out-year maintenance renewals to the point that it is more cost-effective to buy a new system than to keep the original. Other times, the storage system either reaches its maximum capacity or, increasingly, can't keep up with the performance demands of the modern data center. The problems with storage refreshes, however, extend far beyond the cost of buying a new system. Each refresh is time consuming and puts the organization at risk of data loss or application outage.

The Cost of Time

Beyond hardware acquisition, time may be the most expensive aspect of a storage refresh. Time is the one thing most IT professionals do not have, and implementing a new storage system takes a lot of it. A storage refresh is more than just implementing a new system; it is also replacing an old one. Given today's data sets, the time it takes to copy data from the old system to the new is often measured in days. To make matters worse, applications and users depend on the current system, so their data has to be migrated to the new system without interruption.

There are data migration tools that can help with the process, but they also require time to copy all the data. Then, even when the bulk copy is complete, most applications need a full shutdown so the migration tool can make a final, clean sync of anything that changed along the way. Any IT person with more than a few years of experience has a war story about an application that did not start back up correctly after a shutdown, which brings us to the next point.
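The two-phase pattern described above can be sketched in a few lines. This is a minimal illustration, not a real migration tool: the paths are placeholders, and the modification-time check stands in for the change tracking a production tool would use.

```python
# Sketch of a two-phase migration: bulk copy while the application runs,
# then quiesce the application and make one final pass for the delta.
import shutil
from pathlib import Path

def sync(src: Path, dst: Path) -> int:
    """Copy files from src that are missing or newer on dst; return count copied."""
    copied = 0
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        # copy2 preserves timestamps, so an unchanged file is skipped next pass
        if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)
            copied += 1
    return copied

# Phase 1: bulk copy while the app is still writing (this is the days-long part).
#   sync(Path("/mnt/old_array"), Path("/mnt/new_array"))
# Phase 2: shut the application down, then one last pass picks up only the delta.
#   sync(Path("/mnt/old_array"), Path("/mnt/new_array"))
```

The second pass is fast precisely because it only touches the delta, but it still requires the application outage the article warns about.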

The Cost of Data Loss

The loss of data or application availability is another risk. Even after careful migration, copies sometimes don't work as they should. Or, as noted above, applications don't power back up correctly, or administrators miss a configuration setting that still references the old system. At Storage Switzerland, we've seen situations where an application was, unknowingly, still storing a portion of its data on the old system. It was not until the old system was turned off that the user realized what had been happening. The same holds true for data protection: the dozens, if not hundreds, of backup jobs now need updating to point to the new data store.

Rolling Architectures

The truth is that you can't stop IT refreshes. Technology will continuously march forward. An alternative to this three- to four-year "stop everything" event is to do upgrades in smaller chunks. In our recent white paper "The Post-Virtualization Refresh: Is Hyperconvergence the Answer?" (available for download as soon as you register and begin to watch the on-demand webinar), we discuss why hyperconverged architectures are an excellent example of the rolling upgrade concept. Instead of a three- to four-year upgrade for each technology tier (compute, networking, storage), upgrade continuously. With hyperconvergence, a mini-refresh happens every time you add a new node to the cluster. As a result, the architecture continuously refreshes itself, and IT never has to face the "stop everything" event.

Any refresh has a natural counterpart: migrating data off the old system and decommissioning it. A hyperconverged environment simplifies both. Each time a node is added, data is automatically rebalanced to take advantage of the capacity and performance the new node offers. Similarly, if a node needs to be decommissioned, it can be designated for removal from the cluster. The hyperconverged system then rebalances again, redistributing data as if the retiring node were already gone. Once the data is rebalanced, the administrator is notified that it is safe to remove the old node from the environment.
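The rebalancing idea can be illustrated with a toy placement function. Everything here is an assumption for the demo: the node names, block IDs, and the simple hash-modulo scheme are not any vendor's actual algorithm (real scale-out systems typically use consistent hashing or similar techniques so that a membership change moves far less data).

```python
# Toy sketch of data placement in a scale-out cluster, and of which blocks
# must move when the membership changes (a node added or retired).
import hashlib

def place(block_id: str, nodes: list[str]) -> str:
    """Map a data block to a node by hashing its ID (naive modulo placement)."""
    digest = int(hashlib.sha256(block_id.encode()).hexdigest(), 16)
    return nodes[digest % len(nodes)]

def rebalance(blocks, old_nodes, new_nodes):
    """Return {block: (old_node, new_node)} for every block whose home changes."""
    return {b: (place(b, old_nodes), place(b, new_nodes))
            for b in blocks
            if place(b, old_nodes) != place(b, new_nodes)}

blocks = [f"block-{i}" for i in range(10_000)]
three_nodes = ["node-a", "node-b", "node-c"]
four_nodes = three_nodes + ["node-d"]   # adding a node: the "mini-refresh"
moves = rebalance(blocks, three_nodes, four_nodes)
print(f"{len(moves)} of {len(blocks)} blocks move to rebalance")
```

Retiring a node is the same calculation in reverse: compute placement without the retiring node, copy the affected blocks off it, and only then disconnect it, which is exactly the safe-to-remove notification described above.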

George Crump is the Chief Marketing Officer of StorONE. Prior to StorONE, George spent almost 14 years as the founder and lead analyst at Storage Switzerland, which StorONE acquired in March of 2020. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators where he was in charge of technology testing, integration, and product selection.

One comment on “The Time Cost of a Storage Refresh”

Kevin Liebl says:

    George – Excellent topic and excellent blog. To add a little more perspective, the market is changing as IT professionals move from CapEx purchases to OpEx pay-as-you-go services. Current “as-a-service” approaches don’t just lessen the pain – they avoid the problem altogether. There’s zero refresh cost or downtime if the responsibility is on the vendor to non-disruptively add/replace/remove nodes when needed and, more importantly, on a regular contractual basis as new technology enters mainstream usage (e.g., SSDs, large-capacity HDDs, higher-performance I/O ports). These upgrades happen in the background and the resources stay current and up to date. At the end of what would have been the typical 3-5 year cycle for a CapEx investment, the as-a-service solution isn’t out of date or obsolete. It is current, state-of-the-art technology because it has been incrementally updated and upgraded all along. As more enterprises choose pay-as-you-consume models for all of their enterprise IT resources, even on-premises solutions, the whole concept of the refresh cycle may fade away completely.

Comments are closed.
