Software Defined Storage needs Automation and Orchestration

Software Defined Storage (SDS) provides the ability for multiple storage hardware elements to be managed through software, enabling a data center to have a common interface to pool, provision and protect storage assets across vendor platforms. This basic abstraction is a good start: it simplifies the storage environment and makes the administrator’s life easier. But now SDS has to go further. To make the storage environment more “cloud-like”, next generation SDS solutions need to provide automation and orchestration.

Automation vs. Orchestration

Automation is the ability to have storage automatically adjust to conditions occurring in the environment. A simple example of this is when an SDS solution can allocate more flash storage to a data set that is seeing an increase in read and/or write activity. But it could also mean increasing or decreasing the level of data protection based on data activity. For example, a production database that is accessed continuously may be synchronously replicated to a secondary storage system, while a less active application may be asynchronously replicated. In short, the storage system should automatically adjust to the needs of the data it’s hosting without requiring administrator intervention.
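The adjustment logic described above can be sketched in a few lines. This is purely an illustrative assumption of how such a rule might look, not any vendor’s actual API; the class, thresholds, and tier names are all hypothetical.

```python
from dataclasses import dataclass


@dataclass
class DataSet:
    """Hypothetical view of a data set as an SDS automation engine might see it."""
    name: str
    read_iops: int
    write_iops: int
    tier: str = "disk"          # current placement: "disk" or "flash"
    replication: str = "async"  # current protection: "async" or "sync"


def auto_adjust(ds: DataSet, hot_iops: int = 10_000) -> DataSet:
    """Promote busy data sets to flash and tighten their protection;
    demote quiet ones. The threshold is an illustrative assumption."""
    activity = ds.read_iops + ds.write_iops
    if activity >= hot_iops:
        ds.tier = "flash"         # allocate flash to the active data set
        ds.replication = "sync"   # continuously accessed -> synchronous
    else:
        ds.tier = "disk"
        ds.replication = "async"  # less active -> asynchronous is enough
    return ds
```

The point of the sketch is that no administrator appears anywhere in it: the placement and protection decisions are functions of observed activity.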

Orchestration takes this concept one step further by extending automation across the broader environment. For example, storage performance and protection settings may be adjusted based on conditions occurring in an OpenStack or vSphere environment. Imagine a future generation SDS communicating with software defined networking (SDN) to make sure that the network and the storage infrastructure are pre-programmed to deliver the intended service level.

Wrapping it up with QoS

Quality of Service (QoS) becomes the mechanism that drives automation and orchestration. QoS allows IT administrators to set service levels for the application (Gold, Silver, Bronze), and then leverages automation and orchestration functions to maintain those service levels as the environment around the application changes. This includes placing the application on the right type of storage, interfacing with SDN to make sure the correct networking is provisioned, and with the hypervisor to make sure the right amount of CPU/memory resources are allocated. In the end, the goal of the next generation of SDS should be to do more than just reduce provisioning time, it should be to essentially eliminate it.
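To make the Gold/Silver/Bronze idea concrete, a service level can be thought of as a policy that expands into per-domain settings which the orchestrator pushes to storage, SDN, and the hypervisor. The mapping below is a minimal sketch; every tier name, bandwidth figure, and resource number is an illustrative assumption, not a real product’s policy table.

```python
# Hypothetical QoS policy table: one service level expands into storage,
# network (SDN), and hypervisor (CPU/memory) settings. All values are
# illustrative assumptions.
QOS_POLICIES = {
    "Gold":   {"storage": "all-flash", "net_gbps": 25, "vcpus": 8, "mem_gb": 64},
    "Silver": {"storage": "hybrid",    "net_gbps": 10, "vcpus": 4, "mem_gb": 32},
    "Bronze": {"storage": "disk",      "net_gbps": 1,  "vcpus": 2, "mem_gb": 8},
}


def orchestrate(app: str, level: str) -> dict:
    """Expand a single service level into the per-domain settings an
    orchestrator would enforce on behalf of the application."""
    policy = QOS_POLICIES[level]
    return {
        "app": app,
        "storage_class": policy["storage"],  # SDS placement decision
        "network_gbps": policy["net_gbps"],  # SDN bandwidth reservation
        "vcpus": policy["vcpus"],            # hypervisor CPU allocation
        "memory_gb": policy["mem_gb"],       # hypervisor memory allocation
    }
```

The administrator’s only input is the service level; everything downstream of that one word is policy-driven.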

Adaptive Storage

Today, when a new application is brought online, careful consideration has to be given to the storage infrastructure. Should the virtual machine be stored on an all-flash array, a hybrid array or a standard disk array? Also, does the entire virtual machine need to be on the same class of storage, or do just parts of it need to be on high performance storage? The next generation of SDS, based on policies, will make these decisions automatically and instantly.

With the next generation of SDS, provisioning a virtual machine should merely require setting a service level and a capacity restriction. The storage infrastructure should then, in coordination with the virtual infrastructure, adapt to the demands of that virtual machine and the application that resides in it.
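In other words, the entire provisioning request collapses to two parameters. The sketch below assumes a hypothetical `provision_vm` call; the placement mapping and field names are illustrative, not a real interface.

```python
def provision_vm(name: str, service_level: str, capacity_gb: int) -> dict:
    """Hypothetical policy-driven provisioning: the admin supplies only a
    service level and a capacity cap; the storage class is chosen by policy."""
    placement = {"Gold": "all-flash", "Silver": "hybrid", "Bronze": "disk"}
    return {
        "vm": name,
        "storage_class": placement[service_level],  # decided by policy, not by an admin
        "quota_gb": capacity_gb,                    # the capacity restriction
    }
```

Compare that to today’s workflow, where the all-flash vs. hybrid vs. disk question in the previous section has to be answered by a person before the VM is created.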

Want More?

To learn more about the next generation of SDS, attend our live webinar “Three Reasons SDS Needs to go Back to School”. In this webinar Andrew Flint, VP of Marketing at ioFABRIC, will join me to explain why SDS has more to learn so it can deliver the next level of efficiency to the data center. We will detail automation, orchestration, QoS, and intelligent learning of previous storage spends. We will also provide examples of how these capabilities will virtually eliminate daily storage management tasks.

Our live event is on Wednesday, June 24th at 11am ET / 8am PT. Start your morning or have lunch with Storage Switzerland to learn why SDS needs to go back to school. If you pre-register for the webinar this week you will receive an advance copy of Storage Switzerland’s latest white paper “What is Software Defined Storage 2.0?”. It is not available anywhere else, so pre-register to get your copy emailed to you today.

Click To Register

Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.

