Designing Storage for the Software Defined Data Center

The goal of a Software Defined Data Center (SDDC) initiative is to increase an organization’s flexibility by programmatically configuring and reconfiguring the environment through software commands instead of hands-on hardware changes. The SDDC leverages standard hardware to drive down the overall cost of the data center. While computing and networking resources are progressing toward software defined futures, storage has remained stuck in legacy architectures ill-suited for the SDDC.


The software defined data center addresses scaling by adding more hardware elements and managing those elements as if they were one, through software. The computing tier provides element management through hypervisors like VMware ESXi and Microsoft Hyper-V, and through container technologies like Docker and Kubernetes. Using software defined networking, organizations can deploy switches from multiple vendors and have those switches act as a single switch.

The Hardware Problem Facing Software Defined Storage

Most software defined storage (SDS) solutions stop at abstracting the storage software from the storage hardware. The software is installed on standard servers that act as the control plane. The existing storage systems are connected to the control plane and their existing software stacks are deactivated. While the abstraction does give the organization some sense of independence from the storage hardware, it does not provide the full complement of automation tools the organization needs to deliver storage architectures flexible enough to meet the expectations of the SDDC.

While it may seem attractive to leverage existing storage resources, the organization pursuing a SDDC strategy is often better served by moving off of those legacy storage systems. These platforms sell at a premium, and most cannot be purchased without their bundled software, which means the customer pays for two products that do the same thing. Scaling across multiple independent systems is also very difficult.

Rethinking Storage for the SDDC

The organization needs to rethink the storage architecture that supports the other components of the SDDC. The architecture should leverage standard servers instead of legacy storage systems. Storage media, both flash and disk, should be placed inside those servers to create dedicated storage nodes. The environment can then scale automatically as new storage nodes are added.
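The scale-as-you-add model described above can be sketched as a simple pool abstraction. This is a minimal illustration, not the API of any particular SDS product; the class and node names are hypothetical:

```python
class StoragePool:
    """Aggregates independent storage nodes into one logical pool."""

    def __init__(self):
        self.nodes = {}  # node name -> raw capacity in GB

    def add_node(self, name, capacity_gb):
        # Adding a node immediately grows the pool;
        # no manual re-architecture of existing systems is required.
        self.nodes[name] = capacity_gb

    @property
    def total_capacity_gb(self):
        # The pool presents the sum of all nodes as one capacity figure.
        return sum(self.nodes.values())


pool = StoragePool()
pool.add_node("node-1", 10_000)
pool.add_node("node-2", 20_000)
print(pool.total_capacity_gb)  # → 30000
```

The point of the abstraction is that consumers of the pool see one growing capacity figure, not a collection of independent systems to manage.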

The SDS software should also expect that nodes added to the cluster will change over time. Newer nodes will typically have more compute, more flash, and more capacity than earlier ones. The SDS software needs to factor this node variability into its load-balancing decisions, placing each workload on the node best suited to it based on its analysis of those nodes.
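Capability-aware placement of this kind can be sketched as follows. The node attributes, workload fields, and placement policy below are illustrative assumptions, not a description of any specific SDS implementation:

```python
from dataclasses import dataclass


@dataclass
class Node:
    name: str
    flash_free_gb: int  # free flash capacity on this node
    disk_free_gb: int   # free disk capacity on this node


@dataclass
class Workload:
    name: str
    size_gb: int
    latency_sensitive: bool  # latency-sensitive workloads prefer flash


def place(workload, nodes):
    """Place a workload on the node with the most free capacity
    in the media tier that workload needs."""
    tier = "flash_free_gb" if workload.latency_sensitive else "disk_free_gb"
    candidates = [n for n in nodes if getattr(n, tier) >= workload.size_gb]
    if not candidates:
        return None  # no node can hold this workload
    best = max(candidates, key=lambda n: getattr(n, tier))
    setattr(best, tier, getattr(best, tier) - workload.size_gb)
    return best.name


nodes = [Node("node-1", 500, 8_000), Node("node-2", 2_000, 4_000)]
print(place(Workload("database", 300, True), nodes))   # → node-2 (most free flash)
print(place(Workload("archive", 3_000, False), nodes)) # → node-1 (most free disk)
```

In this sketch, a latency-sensitive database lands on the newer, flash-rich node, while a bulk archive lands on the node with the most free disk, which is the kind of node-aware decision the SDS software should make continuously as heterogeneous nodes join the cluster.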

StorageSwiss Take

As companies march toward the SDDC, they are replacing the computing and networking tiers with commodity hardware that is open to software control. The storage tier should operate the same way. Instead of leveraging existing legacy storage systems, the organization should consider the cost advantage of a storage infrastructure that scales incrementally on standard server hardware and storage media, rather than depending on legacy systems that are expensive to maintain and upgrade.

The hardware used is just the beginning. In our next blog we’ll discuss the importance of automation and orchestration, so the storage architecture can enable the SDDC to fully live up to its promise. In the meantime, learn more about how to design storage architectures for the software defined data center by registering for our on demand webinar, “Overcoming the Storage Roadblock to Data Center Modernization.” Attendees can also download a copy of our exclusive white paper, “What Happened to the Software Defined Data Center?”

Watch On Demand

George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.

