Storage remains the thorn in the side of IT as it attempts to meet the expectations of the organization. According to a study commissioned by SUSE, 70% of IT departments say their current storage strategies are not keeping up with the exponential growth of data and the demand for ever-faster responses. Ninety-five percent of those organizations are looking to software defined storage to help them pick up the pace.
Why Software Defined Storage?
Software Defined Storage (SDS) abstracts the storage software from the physical hardware it runs on. It allows the use of commodity storage components while delivering the same enterprise feature set organizations are accustomed to. Open SDS solutions, like those offered by SUSE, take SDS a step further by building on an open software platform.
The abstraction of software from hardware gives IT more flexibility in hardware selection while driving down costs. But the type of SDS IT selects has to match the long-term goals of the company. A turnkey solution may seem like a fast track to a software defined future, but it may later limit scalability and flexibility.
Why SDS now?
While it has gone by different names, SDS as a concept has been around for a while. Why does SDS suddenly make sense to the 95% who think their current strategies are not working?
There are several factors. The first is need. Organizations have an incredible ability to create data, and they also need to process that data more quickly. The speed and scalability of storage are now critical if the organization is to remain competitive.
Budgets, by and large, have not kept pace with the storage need, and frankly they can’t. Few organizations can afford to spend at the same rate they create data. That means enterprises require an alternative that is less expensive and performs at least as well.
The second factor is that SDS is now architecturally sustainable. When SDS was first introduced as “storage virtualization,” CPU horsepower came at a premium. That horsepower is now plentiful. The widespread adoption of server virtualization has also helped, both by familiarizing IT with the concept of abstraction and by giving SDS a virtualized platform on which to run its code.
Today, a virtualized server environment can easily run SDS software components, enabling the architecture to play the dual roles of hosting applications and storage. This also becomes a selection criterion for SDS: can the SDS component run in a virtual machine, and, more importantly, can the architecture run stand-alone as the data center needs to scale?
By far the biggest cost of storage is its management. Making sure the right performance and capacity are available to the right applications at the right time is critical to meeting the organization’s objectives. But the data center today is increasingly fragmented, especially when it comes to storage. Several studies indicate the typical large data center has anywhere from five to a dozen different storage systems from different vendors.
SDS solves the fragmentation problem by centrally managing existing storage assets and creating a scalable strategy going forward, in which inexpensive server nodes are added to the storage architecture. A single point of management means a single method to provision volumes, snapshot data sets, and replicate data to remote systems.
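As an illustration, open SDS platforms such as Ceph (the foundation of SUSE's storage offering) expose that single method as one command set. The pool and volume names below are hypothetical, and exact flags and required initialization steps vary by release, so treat this as a sketch rather than a complete runbook:

```shell
# Create a storage pool backed by commodity server nodes
# (hypothetical pool name; 64 placement groups chosen as an example)
ceph osd pool create blockpool 64

# Provision a 10 GiB block volume in that pool
rbd create blockpool/vol01 --size 10240

# Snapshot the data set at a point in time
rbd snap create blockpool/vol01@before-upgrade

# Enable replication of the volume to a remote peer cluster
# (assumes rbd-mirror peering has already been configured)
rbd mirror image enable blockpool/vol01
```

Whether the underlying nodes are bare metal or virtual machines, the management commands stay the same, which is what makes a single point of management possible.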
The time for SDS is now, and almost every vendor is claiming some form of storage-as-software offering. The next step for IT is to decide which type of SDS makes sense for it. A turnkey approach may provide a faster start but limits long-term flexibility. An open approach provides flexibility not only in hardware selection but also in the software itself.