Briefing Note: The Next Generation Data Center demands SDS 2.0

The next generation data center requires a fundamental change in the way we implement and manage storage. IT professionals trying to implement that change have two basic options: Clean Slate or Augment. The problem is that both approaches bring challenges of their own. Software Defined Storage (SDS) could be the better choice, but it needs to evolve to SDS 2.0 in order to meet all the demands of the next generation data center.

The Clean Slate Challenge

Starting with a clean slate sounds like a good idea until you are the one who has to do the actual cleaning. It means removing old systems, migrating data, and learning new methods for protecting that data. And of course, it is expensive to throw out what you have and replace it with something new, especially when that new thing is a memory-based storage system like a hybrid or all-flash array.

The other problem with a clean slate approach is that the slate never really stays “clean”. Eventually a need arises for an application-specific storage system, or some unique new feature simply has to be purchased.

The Augmentation Challenge

The second option, augmenting what you have, is attractive. It avoids disruptive change and allows the IT planner to buy specific storage solutions for specific problems. But it just can’t scale. Having multiple storage systems for specific use cases, each managed by its own storage software, makes IT less efficient and more prone to mistakes.

The SDS 2.0 Solution

SDS seems to be the obvious answer. It allows the IT planner to transition gradually, leveraging existing hardware while unifying management under a single interface. The problem is that first generation SDS does either too little or too much.

First generation SDS does too little in that it provides only the basic functionality, such as snapshots, provisioning and replication, that existing arrays already deliver. At the same time, it does too much: those features are already in place and work well, so duplicating them adds complexity without adding value.

SDS 2.0 needs to follow the VMware model of containerizing data on a per-application or per-workload basis so that it is more manageable. First, it needs to utilize all forms of storage, from DRAM to the cloud. Second, it needs to optimize the use of those tiers so that the most active data is kept on the highest-performance storage available and cold data is automatically moved to the least expensive storage available. Finally, and potentially most importantly, it needs to assure data integrity and availability by making sure that real-time and near-real-time replicas are available for instant access in case of a storage component failure.
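To make the tiering and replication idea concrete, the sketch below shows, in Python, the kind of placement policy an SDS 2.0 layer could apply. The tier names, thresholds, and object fields are illustrative assumptions, not ioFABRIC's actual implementation: data is scored by how recently and how often it is accessed, hot data is promoted toward DRAM, cold data is demoted toward the cloud, and a replica is always kept on a different tier so a component failure does not take the data offline.

```python
import time
from dataclasses import dataclass, field

# Hypothetical storage tiers, ordered fastest/most expensive to slowest/cheapest.
TIERS = ["dram", "flash", "disk", "cloud"]

@dataclass
class DataObject:
    name: str
    size_gb: float
    last_access: float = field(default_factory=time.time)
    access_count: int = 0
    tier: str = "disk"           # where the primary copy lives
    replica_tier: str = "cloud"  # near-real-time replica on a different tier

def choose_tier(obj: DataObject, now: float) -> str:
    """Simple heat-based placement: recent, frequent access -> faster tier."""
    idle_hours = (now - obj.last_access) / 3600
    if idle_hours < 1 and obj.access_count > 100:
        return "dram"
    if idle_hours < 24:
        return "flash"
    if idle_hours < 24 * 30:
        return "disk"
    return "cloud"

def rebalance(objects: list[DataObject]) -> None:
    """Move each object to the tier its activity warrants, keeping the replica elsewhere."""
    now = time.time()
    for obj in objects:
        target = choose_tier(obj, now)
        if target != obj.tier:
            print(f"migrating {obj.name}: {obj.tier} -> {target}")
            obj.tier = target
        # Keep the replica on a tier other than the primary so a failure of the
        # primary tier's components never makes the data unavailable.
        obj.replica_tier = TIERS[-1] if obj.tier != TIERS[-1] else TIERS[-2]

if __name__ == "__main__":
    objs = [
        DataObject("oltp-db", 200, access_count=500),
        DataObject("archive-logs", 900, last_access=time.time() - 90 * 86400),
    ]
    rebalance(objs)
    for o in objs:
        print(o.name, "->", o.tier, "(replica on", o.replica_tier + ")")
```

A production implementation would migrate data asynchronously and track heat at a much finer granularity, but the placement and replication logic would follow this same shape.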

Products like ioFABRIC’s Vicinity promise to redefine SDS and deliver the next generation storage infrastructure to next generation data centers.


Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS and SAN, Virtualization, Cloud and Enterprise Flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.
