An organization tends to review its storage testing capabilities as it starts its next round of storage refresh. Typically, it will either do the hard work of assembling a lab that provides a small representation of its production environment, leverage vendors to assemble a test to its specifications, or leverage open source utilities to test the new systems. Each of these approaches is essentially “renting” the testing process; they are popular because most organizations see buying their testing capabilities as too expensive for an infrequent process.
“Buying” your storage testing process really just means establishing a permanent approach to change validation and refresh. A permanent process for evaluating new storage systems and testing all changes and updates could be a fully equipped lab, or an investment in a workload simulation appliance. Both are always available and provide a more real-world testing facility than the rented processes do.
Is buying a storage testing process really more expensive, though? By establishing a permanent testing capability, organizations are not only prepared to test new gear as it comes online; they can also test current hardware to see how far they can stretch its capabilities.
For example, when an organization adds a new application, the assumption is often that a new storage system needs to come with it. If the organization owned a storage testing platform, it could simulate the additional load to see how the existing storage system would handle it. Even if it appears that the storage system will hit a performance wall, IT may find that adding more flash storage to the system, or a cache card in the application’s server, will address any performance bottlenecks and allow for years of additional service.
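As a rough sketch of what that simulation can look like with an open source utility such as fio (the workload mix, block size, and queue depth below are placeholder assumptions, not figures from this article; they should be replaced with values profiled from the real application):

```ini
; hypothetical fio job approximating a new application's extra I/O load
; the 70/30 random read/write mix at 4k is an assumed profile
[global]
ioengine=libaio
direct=1
runtime=300
time_based=1
group_reporting=1

[new-app-load]
rw=randrw
rwmixread=70
bs=4k
iodepth=32
numjobs=4
size=10g
directory=/mnt/array-under-test   ; point at the storage system being evaluated
```

Running the same job file against the current array, and again after adding flash or cache, gives directly comparable latency and IOPS numbers before any purchase decision is made.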
In this situation, owning a storage testing platform saves the organization money by eliminating the premature purchase of new equipment, and it spares the business the impact of an improperly sized system. In a worst-case scenario, the organization at least knows what the limits of the storage system are and when it will reach them.
Another problem is the constant flow of software updates from storage and switch vendors. Every time a vendor requires or recommends a software upgrade or update, there is a significant chance that something will go wrong. Testing the storage infrastructure before rolling out updates and patches is the only way to ensure performance degradations are not introduced.
There are also situations where IT is offered a “deal” on a storage system, either because it is the end of a quarter or because a new vendor is trying to “buy” some attention. This is not a storage refresh but simply a point purchase. Most of these purchases are made on hope: hope that the system will alleviate a performance or capacity concern. But no matter what the price, there is no value if the new system doesn’t live up to expectations or proves unreliable.
Owning a storage testing platform allows IT to test the vendor’s claims in a short period of time, so it can still take advantage of the opportunity while making sure it is not wasting the organization’s money on an unreliable product.
Thanks to data center modernization efforts and the rapid pace of innovation by storage vendors, refreshes of storage infrastructure occur faster than ever. The storage refresh should evolve from a once-every-three-years event into a continuous process that constantly explores the limits of the existing infrastructure while remaining fully prepared to test new products as they become available.