Robin Harris over at Storage Mojo recently wrote about the future of legacy storage vendors in the face of low-margin commodity hardware and a primal force in their industry – the cloud. He cites the fates of the minicomputer and the word processor (the dedicated word processing machine) as evidence of the evolution that’s ongoing in the computer industry. In Colorado where I live, we’ve had firsthand experience with both of these examples. DEC used to have an enormous facility here, as did a very successful word processor company called “NBI”, which disappeared in the eighties just as Wang did.
I disagree a bit with Robin when he says this change is due to business models and not technology. I think it’s very much a case of technology, specifically the microprocessor. We’re seeing the storage systems that Google and Amazon essentially invented – driven by software but fueled by cheap server hardware running microprocessors – trickle down to the enterprise and smaller companies in the form of converged architectures and software-defined storage.
I do think Robin’s PC analogy is accurate, except that it’s a technology story as well, again fueled by Intel’s microprocessors that have evolved into the multi-core behemoths we see today. They’ve replaced the specialty CPUs that DEC and others developed for specific applications a generation ago, providing much more processing power than was required for the applications that used to run on these machines.
Storage is an application
I think Robin’s point here is key. Storage IS simply another application that runs on a computer, which is appropriate, since storage controllers are essentially computers. We might even ask if the proliferation of features and functionality that we see in storage systems (now running on those systems’ x86-based controllers) is the result of the abundance of CPU power that these servers-as-storage-controllers have brought to bear.
Robin asks a good question: can vendors find a way to make commodity-based storage systems, often sold as software-only solutions, good enough to truly replace the dedicated-controller storage systems that still run in most companies? I would say the commoditization of storage has been predicted since the late ’90s, when DataCore and FalconStor first came to market, but hasn’t really happened. Scale-out storage was also supposed to rule the day, yet scale-up storage systems tend to be the market share leaders.
My colleague, George Crump, recently wrote about the hidden expense of commodity-based storage, and his thoughts further explain why traditional purpose-built storage systems continue to hold the market share lead, at least for now.
Maybe the question to ask is whether the incumbent vendors and/or traditional storage systems can keep control of the data center. It’s here that I think the answer really is the business model instead of the technology. To stay relevant, these vendors will need to make the business case for support, simpler implementation, consolidation of features/functionality/services and the intangible reliability that comes from a trusted, enterprise supplier. They also need to explain why purpose-built hardware still has its advantages, but be prepared to offer a more commodity-based option when those advantages aren’t significant to the use case.
Increasing transistor density (Moore’s Law), in the form of faster, cheaper microprocessors, is behind many (maybe most) of the new products and new technologies that business and IT have seen. It’s replaced minicomputers with general-purpose servers, filled PCs with enough power to replace specialty machines such as word processors and enabled innovations like server virtualization to take off.
More and more dedicated systems are becoming software-defined functions of these same powerful, general-purpose servers, but traditional dedicated storage systems are still dominant in most enterprise and large business data centers. To stay that way, these vendors will need to do both: leverage a better business model and flex their technological muscle.