For most data centers, periodic storage refreshes are a way of life. Often, the storage vendor forces the refresh by pricing out-year maintenance renewals so high that it is more cost effective to buy a new system than to keep the original. Other times, the storage system either reaches its maximum capacity or, increasingly, can't keep up with the performance demands of the modern data center. The problems with storage refreshes, however, extend far beyond the cost of buying a new system. Each refresh is time consuming and puts the organization at risk of data loss or application outage.
The Cost of Time
Beyond hardware acquisition, time may be the most expensive aspect of a storage refresh. Time is the one thing most IT professionals do not have, and implementing a new storage system takes a lot of it. A storage refresh is more than just implementing a new system; it also means replacing the old one. Given today's data sets, the time it takes to copy data from the old system to the new one is often measured in days. To make matters worse, applications and users count on the current system, so their data has to be migrated to the new system without interruption.
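A rough back-of-the-envelope calculation shows why "measured in days" is no exaggeration. The data set size, link speed, and efficiency figure below are illustrative assumptions, not numbers from any specific environment:

```python
# Back-of-envelope migration time, assuming a 100 TB data set moving
# over a 10 Gb/s link -- both figures are hypothetical examples.
DATA_TB = 100
LINK_GBPS = 10
EFFICIENCY = 0.35  # effective throughput is often well below line rate

data_bits = DATA_TB * 8 * 10**12            # terabytes -> bits
seconds = data_bits / (LINK_GBPS * 10**9 * EFFICIENCY)
print(f"Estimated bulk copy time: {seconds / 86400:.1f} days")
# ~2.6 days -- and that is before the final sync and cut-over
```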
There are data migration tools that can help with the process, but they too need time to move all the data. Even when the bulk copy is complete, most applications require a full shutdown so the tool can make a final, clean sync. Any IT person with more than a few years of experience has a war story about an application that did not start back up correctly after a shutdown, which brings us to the next point.
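Here is a minimal sketch of that bulk-copy-then-final-sync pattern. rsync stands in for whatever migration tool is actually in play, and the paths and service name are hypothetical:

```python
# Two-pass migration: long bulk copy while the application runs,
# then a short outage for the final clean sync and cut-over.
import subprocess

OLD, NEW = "/mnt/old_array/appdata/", "/mnt/new_array/appdata/"

# Pass 1: bulk copy; the application stays online the whole time.
subprocess.run(["rsync", "-a", OLD, NEW], check=True)

# Pass 2: quiesce the application, copy only what changed since pass 1.
subprocess.run(["systemctl", "stop", "app.service"], check=True)
subprocess.run(["rsync", "-a", "--delete", OLD, NEW], check=True)

# Repoint the application at NEW, then bring it back up -- the step
# where the war stories tend to happen.
subprocess.run(["systemctl", "start", "app.service"], check=True)
```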
The Cost of Data Loss
The loss of data or application availability is another problem. Even after a careful migration, copies sometimes don't work as they should. Or, as noted above, applications don't power back on correctly, or administrators miss a configuration change that still references the old system. At Storage Switzerland, we've seen situations where an application was still, unknowingly, storing a portion of its data on the old system; it was not until the old system was turned off that the user realized what had been happening. The same holds true for data protection: the dozens, if not hundreds, of backup jobs now need updating to point to the new data store.
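One way to catch those stale references before the old array goes dark is a simple sweep of live mounts and configuration files. The hostname and config directory below are hypothetical; the point is to check everything that might still point at the retiring system:

```python
# Pre-decommission sweep: look for anything still referencing the
# old array. Hostname and config path are illustrative assumptions.
from pathlib import Path

OLD_ARRAY = "old-san-01"  # hostname/IP of the system being retired

# Any live mount still backed by the old array?
for line in Path("/proc/mounts").read_text().splitlines():
    if OLD_ARRAY in line:
        print(f"LIVE MOUNT still on old array: {line}")

# Any application or backup-job config still referencing it?
for cfg in Path("/etc/app-configs").rglob("*.conf"):
    if OLD_ARRAY in cfg.read_text(errors="ignore"):
        print(f"Stale reference in {cfg}")
```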
The truth is that you can't stop IT refreshes; technology will continuously march forward. An alternative to this three- to four-year "stop everything" event is to do upgrades in smaller chunks. In our recent white paper, "The Post-Virtualization Refresh: Is Hyperconvergence the Answer?" (available for download as soon as you register and begin watching the on-demand webinar), we discuss why hyperconverged architectures are an excellent example of the rolling-upgrade concept. Instead of a three- to four-year upgrade for each technology tier (compute, networking, storage), upgrade continuously. With hyperconvergence, a mini-refresh happens every time you add a new node to the cluster. As a result, the hyperconverged architecture continuously refreshes itself, and IT never has to face the "stop everything" event.
Any refresh has a natural counterpart: migrating data off the old system and decommissioning it. A hyperconverged environment simplifies the process. Each time a node is added, data is automatically rebalanced to take advantage of the capacity and performance the new node offers. Similarly, if a node needs to be decommissioned, it can be designated for removal from the cluster. The hyperconverged system once again rebalances the data, this time moving it off the node marked for retirement. Once the data is rebalanced, the administrator is notified that it is safe to remove the old node from the environment.
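The toy model below illustrates that add/retire rebalance cycle. Real hyperconverged systems use far more sophisticated placement and replication logic; this sketch, with made-up node and block names, only shows the flow:

```python
# Toy rebalance model: blocks are spread across cluster members.
# Adding a node pulls blocks onto it (the "mini-refresh"); retiring
# a node drains its blocks to the survivors before removal.
class Cluster:
    def __init__(self, nodes, blocks):
        self.nodes = list(nodes)
        self.blocks = list(blocks)
        self._rebalance()

    def _rebalance(self):
        # Spread blocks evenly across current members (round-robin).
        self.placement = {n: [] for n in self.nodes}
        for i, block in enumerate(self.blocks):
            self.placement[self.nodes[i % len(self.nodes)]].append(block)

    def add_node(self, node):
        self.nodes.append(node)
        self._rebalance()  # new capacity absorbs its share of data

    def retire_node(self, node):
        self.nodes.remove(node)
        self._rebalance()  # data drains off the departing node
        print(f"{node} drained; safe to remove from the environment")

cluster = Cluster(["node1", "node2"], [f"blk{i}" for i in range(8)])
cluster.add_node("node3")     # rolling upgrade: rebalance includes node3
cluster.retire_node("node1")  # decommission: blocks move off node1
```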