Storage Opinion: Storage Evolves Thanks to the Processor, Not the Business Model

Robin Harris over at Storage Mojo recently wrote about the future of legacy storage vendors in the face of low-margin commodity hardware and a primal force in their industry – the cloud. He cites the fate of the minicomputer and the word processor (the dedicated word processing machine) as evidence of the evolution that’s ongoing in the computer industry. In Colorado, where I live, we’ve had firsthand experience with both of these examples. DEC used to have an enormous facility here, as did a very successful word processor company called “NBI,” which disappeared in the eighties just as Wang did.

I disagree a bit with Robin when he says this change is due to business models and not technology. I think it’s very much a case of technology, specifically the microprocessor. We’re seeing the storage systems that Google and Amazon essentially invented – driven by software but fueled by cheap server hardware running microprocessors – trickle down to the enterprise and smaller companies in the form of converged architectures and software-defined storage.

I do think Robin’s PC analogy is accurate, except that it’s a technology story as well, again fueled by Intel’s microprocessors that have evolved into the multi-core behemoths we see today. They’ve replaced the specialty CPUs that DEC and others developed for specific applications a generation ago, providing much more processing power than was required for the applications that used to run on these machines.

Storage is an application

I think Robin’s point here is key. Storage IS simply another application that runs on a computer, which is appropriate, since storage controllers are essentially computers. We might even ask if the proliferation of features and functionality that we see in storage systems (now running on those systems’ x86-based controllers) is the result of the abundance of CPU power that these servers-as-storage-controllers have brought to bear.
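
To make that point concrete, here’s a minimal sketch of “storage as an application” – my own toy illustration, not anything from Robin’s post or from any vendor’s product. It’s a block store written in plain Python that runs on a general-purpose server and keeps its blocks in an ordinary file on commodity disk. All of the names are hypothetical, and it ignores everything a real controller does (caching, RAID, replication, snapshots), but it shows that, at bottom, reading and writing blocks is just software.

# Toy illustration only: a block store as ordinary software on a general-purpose server.
# All names are hypothetical; no real product or API is implied.
import os

BLOCK_SIZE = 4096  # 4 KiB blocks, a common unit for block storage


class ToyBlockStore:
    """Fixed-size block storage backed by a single file on commodity disk."""

    def __init__(self, path: str, num_blocks: int):
        self.path = path
        self.num_blocks = num_blocks
        # Create a backing file of the full volume size (sparse on most filesystems).
        with open(path, "wb") as f:
            f.truncate(num_blocks * BLOCK_SIZE)

    def write_block(self, index: int, data: bytes) -> None:
        if not 0 <= index < self.num_blocks:
            raise IndexError("block index out of range")
        if len(data) > BLOCK_SIZE:
            raise ValueError("data larger than one block")
        # Pad short writes so every block stays exactly BLOCK_SIZE bytes.
        padded = data.ljust(BLOCK_SIZE, b"\x00")
        with open(self.path, "r+b") as f:
            f.seek(index * BLOCK_SIZE)
            f.write(padded)
            f.flush()
            os.fsync(f.fileno())  # push the write to stable media

    def read_block(self, index: int) -> bytes:
        if not 0 <= index < self.num_blocks:
            raise IndexError("block index out of range")
        with open(self.path, "rb") as f:
            f.seek(index * BLOCK_SIZE)
            return f.read(BLOCK_SIZE)


if __name__ == "__main__":
    store = ToyBlockStore("toy_volume.img", num_blocks=256)  # ~1 MiB toy volume
    store.write_block(3, b"hello from a server-as-storage-controller")
    print(store.read_block(3).rstrip(b"\x00"))

Everything a real array layers on top of this – snapshots, deduplication, replication, tiering – is more of the same: software consuming the abundant x86 cycles described above.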

Robin asks a good question: can vendors find a way to make commodity-based storage systems, often sold as software-only solutions, good enough to truly replace the dedicated-controller storage systems that still run in most companies? I would say the commoditization of storage has been predicted since the late ’90s, when DataCore and FalconStor first came to market, but it hasn’t really happened. Scale-out storage was also supposed to rule the day, yet scale-up storage systems tend to be the market share leaders.

My colleague, George Crump, recently wrote about the hidden expense of commodity-based storage, and his thoughts further explain why traditional purpose-built storage systems continue to hold the market share lead, at least for now.

Maybe the question to ask is whether the incumbent vendors and/or traditional storage systems can keep control of the data center. It’s here that I think the answer really is the business model instead of the technology. To stay relevant, these vendors will need to make the business case for support, simpler implementation, consolidation of features/functionality/services and the intangible reliability that comes from a trusted enterprise supplier. They also need to explain why purpose-built hardware still has its advantages, but be prepared to offer a more commodity-based option when those advantages aren’t significant to the use case.

StorageSwiss Take

Increasing transistor density (Moore’s Law), in the form of faster, cheaper microprocessors, is behind many (maybe most) of the new products and technologies that business and IT have seen. It’s replaced minicomputers with general-purpose servers, filled PCs with enough power to replace specialty machines such as word processors, and enabled innovations like server virtualization to take off.

More and more dedicated systems are becoming software-defined functions of these same powerful, general-purpose servers, but traditional dedicated storage systems are still dominant in most enterprise and large business data centers. To stay that way, these vendors will need to do both: leverage a better business model and flex their technological muscle.


Eric is an Analyst with Storage Switzerland and has over 25 years experience in high-technology industries. He’s held technical, management and marketing positions in the computer storage, instrumentation, digital imaging and test equipment fields. He has spent the past 15 years in the data storage field, with storage hardware manufacturers and as a national storage integrator, designing and implementing open systems storage solutions for companies in the Western United States.  Eric earned degrees in electrical/computer engineering from the University of Colorado and marketing from California State University, Humboldt.  He and his wife live in Colorado and have twins in college.

3 comments on “Storage Opinion: Storage Evolves Thanks to the Processor, Not the Business Model”
  1. BOTH a business model change and a technology change are needed; neither one alone is sufficient. The business model change is needed because IT is changing. The technology change is needed because legacy products cannot support the new business models. They lack the necessary flexibility, elasticity, cost basis and, where applicable, multi-tenancy.

    We will continue to announce customer proof points showing exactly this: software-defined storage, sold as a service (on-premises!), displacing the traditional, purpose-built, CapEx- and OpEx-intensive products.

  2. Tim Wessels says:

    Well, things that are initially hard to do are replaced by things that are easier to do. When multi-MB hard disk drives were first introduced, it was a tough technology challenge. Disk drives had to be “married” to their disk controllers; low-level formatting of a drive (using firmware on the controller) “consummated” the marriage. Today you plug a multi-TB disk drive into a SATA connector on your computer and you are done. The emphasis in storage controllers has shifted from a proprietary hardware/firmware disk drive controller combination to software running on a generic computer with sufficient processor speed and memory. Actually, the disk drives themselves now have their own processors and over a million lines of code running on them. The change in storage software from closed and proprietary to open and shared is changing the business model for storage. Commodity storage systems and software may not be able to replace “bullet-proof” primary storage systems for some data storage applications, but the trend seems to support their ability to eventually do so.

    • Eric Slack says:

      Great points, Tim – we shouldn’t take all the technology that’s now packed into disk drives, or the ‘rampant standardization’, for granted either. They’re certainly enablers for the evolution we talked about in this piece. Thanks for your insight.

