The Dirty Little Secret of Storage TCO

The average data center refreshes its storage systems every three to five years. But organizations don’t start this process because the calendar tells them to. Something happens in the environment that forces the upgrade, typically the storage system failing to meet the performance and/or capacity demands of the organization. Occasionally, a specific software feature or hardware advancement requires a new storage system. The worst reason is when a vendor practices “technology obsolescence”, an approach that makes the out-year maintenance of the existing system so expensive that the customer is forced into buying a new one.

Technology obsolescence is a case of vendors acting in their own best interest instead of their customers’ by forcing the customer to upgrade storage systems or controllers. The motivation for this upgrade can be that the storage system has reached its maximum allowed storage capacity, that it does not support a new drive interface such as SAS-II, or that it lacks a new networking interface like Fibre Channel over Ethernet (FCoE). Again, the worst example of a forced upgrade is when a storage system is still functionally operational and meeting all of the customer’s demands, but the non-warranty maintenance costs are so high that it is cheaper for the customer to buy a new warranted system.
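To make the economics of this practice concrete, here is a minimal break-even sketch. Every figure in it (the system prices and the 22% annual maintenance rate) is a hypothetical assumption chosen only to illustrate the mechanism; actual pricing varies widely by vendor and system.

```python
# Hypothetical break-even sketch for "technology obsolescence" pricing.
# Every number below is an illustrative assumption, not a vendor figure.

old_system_list_price = 250_000   # what the existing system cost new
new_system_price = 200_000        # discounted price of the proposed replacement
post_warranty_rate = 0.22         # annual maintenance as a fraction of list price

# Cumulative cost of keeping the old system N years past its warranty
for years in range(1, 6):
    maintenance = old_system_list_price * post_warranty_rate * years
    verdict = "buy new" if maintenance > new_system_price else "keep old"
    print(f"{years} yr past warranty: ${maintenance:>9,.0f} cumulative -> {verdict}")

# At a 22% annual rate, the maintenance stream alone overtakes the
# (discounted) replacement price within four years -- exactly the window
# in which the vendor pitches a "cheaper" full replacement.
```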

In an attempt to better prepare for a future storage refresh, many IT planners have purchased storage systems with that refresh capability built in. This typically comes in the form of data-in-place upgrades, where existing storage shelves can be attached to new controllers that can drive greater performance and capacity, or that support new features and hardware technologies.

Storage Software Seizure

What comes as a shock to IT professionals is the cost of the software that goes along with these new systems. Most vendors, when they sell a new controller, also require that the customer buy new software to go along with it. There is seldom any “portability” of software licenses between controllers, even though the vast majority of the software is essentially the same from one generation of controller to the next.

It is important to note that most organizations are already paying for a software maintenance agreement. It stands to reason that if the software has received incremental enhancements, the organization is already entitled to the upgrade. Forcing the customer to pay maintenance and then pay again for software on a new controller is egregious.

The problem with this “status quo” approach is that the customer has to buy the storage system software all over again, even though most of it has not changed. The software is merely running on a different controller; the snapshot, replication and tiering components remain the same. There has to be a better way. After all, if you buy a new laptop to replace your old one, you don’t have to buy another license for Microsoft Office; you simply re-download the software and install it, all while enjoying faster performance.

Taking Another Look at Storage TCO

According to a recent Enterprise Strategy Group (ESG) survey, the two most important factors customers consider when selecting a storage system are upfront purchase price and long-term total cost of ownership (TCO). Addressing the first concern, price, often requires that storage vendors focus more on software than hardware. Instead of carrying the burden of storage hardware development, it makes more sense for them to leverage off-the-shelf hardware. This should lower their costs and, in turn, lower the purchase price for customers.

The focus on upfront price also means that storage system vendors give in to the temptation of offering a heavy discount up front and then finding other ways to extract revenue from their customers. Requiring them to buy new software when they upgrade hardware is one example.

Addressing the second problem, long-term storage TCO, requires changing the software licensing status quo. However, because many storage hardware vendors are becoming more software focused, addressing software licensing issues is more difficult, since those vendors are counting on software repurchasing revenue streams to bolster profits. This means that storage vendors are unlikely to change the current repurchase cycle, and every three to five years the IT organization will have to pay again for software it already owns in order to support the new storage hardware it is purchasing.

This is an odd state of affairs. The drive to become a more software-focused entity, so popular across IT, should leave storage companies less tied to hardware, not more so. Why can’t the software be portable across different storage controllers, especially if those controllers are running the same software version?

Certainly there is a need to upgrade hardware. More processing power and faster I/O channels should be leveraged so that storage systems can keep pace with organizational demands for performance and capacity. But these upgrades should be as granular as possible. In many cases, storage controller CPU power and memory are more than adequate; it is the interfaces to hosts and to drive technology that change (FCoE and SAS-II, for example). Ideally, the system would allow just the components that need to be upgraded to be changed, not the entire system.

This type of hardware refresh model will require a different type of licensing model, one that allows the customer to carry their software investment forward across generations of hardware. Certainly, if new software functionality is added, it should be paid for; but there shouldn’t be a requirement to pay for old functionality a second or third time.

Rethinking the Storage Refresh

This reality should lead IT planners to reconsider the way they calculate the storage refresh, as well as the amortization time allocated to a storage investment. As stated earlier, the cause of a storage refresh typically comes down to one of four factors: a lack of storage performance, a lack of storage capacity, a desire for a new storage feature, or the high cost of renewing the warranty on the storage hardware. All four of these factors are often bundled up by the incumbent storage vendor to make a compelling case for a complete storage system replacement. But in reality this turns a simple incremental upgrade into a very complex storage refresh project. Ideally, these four factors should be unbundled, allowing the customer to address them as their needs dictate.

Storage performance and capacity can be addressed by upgrading the storage devices (from hard disk drives to solid state drives), adding new interfaces (SAS-II) or upgrading the controllers. When such an upgrade is needed, the vendor should allow the current storage software to be moved from the old system to the new one, so that the customer only has to pay for the raw hardware. The desire for a new storage software feature should be addressed by upgrading or adding a module to that software, not by total software replacement.

Finally, the practice of charging a premium for maintenance on storage hardware beyond the initial warranty should simply come to an end. While some additional upcharge is reasonable so that older parts can be kept on hand, the upcharge commonly practiced by the industry today is unacceptable.

A Solution

Vendors need to move to a model that unbundles the factors driving storage upgrades. An excellent example can be found in Dell’s perpetual licensing of its storage software. In this model the storage software purchase is separated from the hardware purchase and can be carried forward to a future storage hardware system when it is purchased.

Given Dell’s stance as a software-first storage company, this approach makes sense and gives the customer some much-needed flexibility. It is somewhat ironic that this more modern, customer-friendly licensing model comes from server manufacturer Dell. The benefits of the model can be seen in Dell’s claim that its customers average a six-to-seven-year storage system refresh cycle instead of the more common three to five years. The combination of perpetual licensing and long storage system retention should lead to optimal TCO. It is critical for IT planners not to focus solely on upfront acquisition costs. While the initial purchase price is of course important, the cost of upgrades to both software and hardware can far outstrip it. The ability to get an extra four to five years of usefulness out of the original investment should also be factored in.
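As a rough illustration of how the two licensing models diverge over a full planning horizon, the sketch below compares ten-year TCO under a conventional four-year repurchase cycle against a perpetual-license model with a seven-year refresh. Every number (hardware price, software price, refresh intervals) is a hypothetical assumption for illustration and does not reflect actual Dell or industry pricing.

```python
# Hypothetical ten-year TCO comparison: conventional repurchase cycle vs.
# perpetual software licensing. All figures are illustrative assumptions.

HORIZON = 10  # planning horizon in years

def tco(hw_price, sw_price, refresh_years, sw_portable):
    """Total cost over HORIZON years.

    hw_price      -- hardware cost per refresh
    sw_price      -- storage software cost
    refresh_years -- how often the system is replaced
    sw_portable   -- True if the software is bought once and carried forward
    """
    refreshes = -(-HORIZON // refresh_years)  # ceiling division
    hw_total = hw_price * refreshes
    sw_total = sw_price if sw_portable else sw_price * refreshes
    return hw_total + sw_total

# Conventional model: refresh every 4 years, software repurchased each time.
conventional = tco(hw_price=150_000, sw_price=100_000,
                   refresh_years=4, sw_portable=False)

# Perpetual model: refresh every 7 years, software bought once.
perpetual = tco(hw_price=150_000, sw_price=100_000,
                refresh_years=7, sw_portable=True)

print(f"Conventional 10-year TCO: ${conventional:,}")  # 3 buys of each: $750,000
print(f"Perpetual 10-year TCO:    ${perpetual:,}")     # 2 hw buys, 1 sw buy: $400,000
```

Under these assumptions the perpetual model avoids one full hardware refresh and two software repurchases over the decade, which is where the TCO gap comes from.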

This Article Sponsored By Dell


Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS and SAN, Virtualization, Cloud and Enterprise Flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.

One comment on “The Dirty Little Secret of Storage TCO”
  1. Vendors like Exablox have already overcome this barrier with their BYOD(isk) strategy and their scale-out platform. Nutanix has done the same. In both solutions the nodes come complete with software and controllers that last the lifetime of the hardware. As the technology advances, you simply add additional nodes that integrate across the entire cluster, without having to endure painful upgrades.
