How to Avoid the Storage Refresh

IT professionals look forward to storage refreshes about as much as they look forward to a trip to the dentist. In fact, the dentist may be preferable. There are certainly times when storage needs to be upgraded, but IT should do everything possible to make sure that happens as rarely as possible. When a refresh does happen, IT needs to be prepared with performance and capacity trending information so that the next storage system lasts through its amortization cycle.

Thanks to the accelerating pace of technological advancement, IT is often enticed to upgrade storage technology sooner than it needs to. If the organization is experiencing a tough-to-identify performance problem, moving to an all-flash array is the equivalent of hitting the problem with a sledgehammer. The reality is that IT, armed with the proper insight into storage resource utilization, may be able to avoid a premature storage refresh.

In actuality, most data centers have more than one primary storage system available to solve performance problems. The problem is that IT can't see across those systems to get a global view of performance and capacity utilization. IT professionals need to look for a storage dashboard solution that can point to performance hotspots so workloads can be moved to another system. The dashboard has to simultaneously identify utilization across systems and vendors. Because of virtualization at both the hypervisor and the storage system, the dashboard solution also needs to provide an end-to-end view of resource consumption. An end-to-end view enables IT to identify which specific VM is excessively consuming resources.
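The kind of cross-system hotspot detection a dashboard performs can be sketched in a few lines. This is a minimal illustration, not a real vendor API: the metric names, the latency threshold, and the data layout are all assumptions made for the example.

```python
# Hypothetical sketch of cross-array hotspot detection. The threshold,
# metric names, and data layout are illustrative assumptions, not a
# real dashboard product's API.

LATENCY_THRESHOLD_MS = 10.0  # assumed per-VM service-level target

def find_hotspots(metrics):
    """Return VMs whose average latency exceeds the target, worst first.

    `metrics` maps array name -> list of (vm_name, avg_latency_ms, iops),
    as a dashboard might collect from multiple arrays across vendors.
    """
    hotspots = []
    for array, vms in metrics.items():
        for vm, latency_ms, iops in vms:
            if latency_ms > LATENCY_THRESHOLD_MS:
                hotspots.append((array, vm, latency_ms, iops))
    # Sort so the worst offenders surface at the top of the report
    return sorted(hotspots, key=lambda h: h[2], reverse=True)

sample = {
    "array-a": [("vm-db01", 18.4, 9200), ("vm-web02", 3.1, 800)],
    "array-b": [("vm-etl03", 12.7, 4100)],
}
for array, vm, lat, iops in find_hotspots(sample):
    print(f"{vm} on {array}: {lat} ms at {iops} IOPS")
```

The end-to-end view described above is what supplies the per-VM numbers; once they are normalized into one structure, ranking offenders across vendors is straightforward.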

Armed with this insight, IT potentially has several ways to solve resource shortages. One solution is to move less important workloads to another system, giving the performance-intensive workload more exclusive access; another is to move the performance-intensive application to a dedicated system. Yet another option is adding an internal flash drive for the application to use as a cache. A single SSD with caching software is far less expensive than an entirely new all-flash array.
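The first option, moving less important workloads off a contended system, amounts to a simple greedy selection. The sketch below assumes each workload carries a priority and an IOPS footprint; those fields and the function name are invented for illustration.

```python
# Hypothetical rebalancing sketch: evict the lowest-priority workloads
# from a contended array until enough IOPS headroom is freed. The
# priority scale and workload tuples are assumptions for the example.

def pick_workloads_to_move(workloads, iops_to_free):
    """Return (vms_to_move, iops_freed).

    `workloads` is a list of (vm_name, priority, iops), where a lower
    priority number means the workload is less important.
    """
    moved, freed = [], 0
    # Consider the least important workloads first
    for vm, priority, iops in sorted(workloads, key=lambda w: w[1]):
        if freed >= iops_to_free:
            break
        moved.append(vm)
        freed += iops
    return moved, freed

contended = [("vm-a", 1, 2000), ("vm-b", 3, 5000), ("vm-c", 2, 1500)]
vms, freed = pick_workloads_to_move(contended, iops_to_free=3000)
print(vms, freed)  # moves vm-a and vm-c, freeing 3500 IOPS
```

A real rebalancing decision would also weigh capacity, migration cost, and affinity rules, but the core trade-off is the one shown: free enough headroom while disturbing the least important workloads.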

At some point, though, it will be time to refresh storage, either because resources can no longer be balanced or because a storage system has reached the end of its practical life. Without insight, IT has no way to know what the sizing requirements for the next system will be. Without that information, the default "strategy" is to buy as much capacity and performance as the budget allows. With the appropriate information, IT may realize that a better solution is to buy a smaller high-performance system, a larger secondary storage system, and software to automatically move data between the two.
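Turning trending data into a sizing requirement can be as simple as fitting a line through historical capacity samples and projecting it across the amortization period. The sketch below is an illustration of that idea with invented sample data; a real projection would also account for seasonality and planned projects.

```python
# Hypothetical sizing sketch: least-squares linear fit over monthly
# capacity samples, projected forward over the amortization window.
# The sample history and horizon are invented for illustration.

def project_capacity(history_tb, months_ahead):
    """Project used capacity `months_ahead` from monthly samples."""
    n = len(history_tb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history_tb) / n
    # Least-squares slope: average monthly growth in TB
    slope = (
        sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history_tb))
        / sum((x - mean_x) ** 2 for x in xs)
    )
    return history_tb[-1] + slope * months_ahead

history = [40, 42, 45, 47, 50, 52]  # TB used, one sample per month
needed = project_capacity(history, months_ahead=36)
print(f"Projected capacity in 3 years: {needed:.1f} TB")
```

With a projection like this for both capacity and performance, IT can size the next purchase to the amortization cycle rather than to the budget ceiling.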

Preventing premature upgrades and better planning for legitimate upgrades are two ways to keep storage costs under control. In our on-demand webinar "3 Steps to be a Storage Superhero – How to Slash Storage Costs", we discuss these methods in more detail, as well as how to reduce storage capacity consumption.


Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud and Enterprise Flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.

