The State of the Software-Defined Data Center

The concept of the “software-defined data center” (SDDC) emerged around 2012, and since then the term has been adopted by nearly all vendors and heavily debated by analysts. Some argue that it is nothing more than a “marketecture,” a largely unfulfilled and currently unattainable vision; others tout real capabilities and value. In this blog series, Storage Switzerland will dissect what truly constitutes an SDDC, how realistic the vision is, and the various components of an SDDC infrastructure stack.

What Is the Software-Defined Data Center?

Put simply, the SDDC builds upon the concepts established by server virtualization, abstracting away the boundaries of physical hardware and enabling more dynamic, self-service provisioning. Its goal is to deliver the entire infrastructure stack, as an application, to the end user. Storage Switzerland will provide more in-depth details about the core elements of the SDDC stack (server virtualization, software-defined storage, software-defined networking and data protection) in forthcoming blogs.

In both concept and functionality, the SDDC is a byproduct of the introduction of public cloud services. The major commercial and consumer public cloud service providers (CSPs), including Amazon, Facebook and Google, required a new data center architecture to cost-effectively deliver a range of services, to a myriad of users, on demand and on a global basis. Legacy approaches grounded in expensive, inflexible and proprietary hardware-based appliances could not provide the multi-tenancy or the levels of resource agility, elasticity and utilization needed to deliver public cloud services globally and at scale. Instead, these CSPs required a distributed, massively scale-out infrastructure that could run on lower-priced hardware components. Equally important, the CSP infrastructure needed to be easily managed by a comparatively small IT team. To optimize responsiveness and cost efficiency, services must be provisioned quickly and with minimal (ideally zero) intervention from IT managers. In other words, these CSPs required a data center infrastructure fabric that could be quickly and easily partitioned into much smaller units on a self-service basis. It also needed to run on low-priced hardware components that could be easily scaled and swapped out.

The SDDC architecture decouples key infrastructure operations, including initial provisioning and ongoing configuration and management, from server, storage and networking hardware. Hardware resources are pooled and then automatically allocated and delivered as-a-service on a workload-specific basis. They are managed commonly through a centralized application programming interface (API).
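The pooling-and-allocation model described above can be sketched in a few lines. This is a minimal, hypothetical illustration of carving workload-specific slices out of a shared hardware pool; the names (`ResourcePool`, `Workload`, `allocate`) are illustrative, not any vendor's actual API.

```python
# Hypothetical sketch: workload-specific allocation from a pooled
# hardware fabric, fronted by one programmatic interface. All names
# here are illustrative assumptions, not a real SDDC product API.
from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    cpus: int
    memory_gb: int


class ResourcePool:
    """Pools hardware capacity and hands out slices on demand."""

    def __init__(self, total_cpus: int, total_memory_gb: int):
        self.free_cpus = total_cpus
        self.free_memory_gb = total_memory_gb
        self.allocations: dict[str, Workload] = {}

    def allocate(self, wl: Workload) -> bool:
        # Refuse requests the pool cannot satisfy; otherwise carve
        # out the slice and track it under the workload's name.
        if wl.cpus > self.free_cpus or wl.memory_gb > self.free_memory_gb:
            return False
        self.free_cpus -= wl.cpus
        self.free_memory_gb -= wl.memory_gb
        self.allocations[wl.name] = wl
        return True

    def release(self, name: str) -> None:
        # Return a workload's slice to the shared pool.
        wl = self.allocations.pop(name)
        self.free_cpus += wl.cpus
        self.free_memory_gb += wl.memory_gb


pool = ResourcePool(total_cpus=64, total_memory_gb=256)
assert pool.allocate(Workload("web-tier", cpus=8, memory_gb=32))
assert not pool.allocate(Workload("analytics", cpus=128, memory_gb=512))
pool.release("web-tier")
```

The point of the sketch is the self-service loop: a consumer asks the pool (the API) for a slice, gets it or is refused based on current capacity, and hands the slice back when done, with no human in the path.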

The core objectives of an SDDC are:

  • To open up freedom of choice in underlying hardware resources, improving return on investment by tailoring the hardware itself to the organization’s unique requirements.
  • To facilitate a more efficient data center footprint through increased hardware resource utilization and reduced power consumption. Multiple workloads may run on the same infrastructure at the same time (and an application could even, in theory, be served from multiple data centers). Systems may be turned off when not needed and integrated into the SDDC fabric as they are needed, reducing the need for over-provisioning.
  • To accelerate application provisioning, by allocating infrastructure resources more quickly and more dynamically as new workloads are spun up or as workload needs change.
  • To minimize management complexity, by obtaining common, programmable oversight over hardware resources and enabling end users’ procurement of resources independently of IT. Furthermore, with the introduction of more intelligent management capabilities in the vein of artificial intelligence for IT operations (AIOps), workloads can be rebalanced based on insights into SLA requirements and the state of underlying infrastructure. Factors such as which systems are experiencing the best availability, which are at risk, quality of power supplies, etc., help determine how to rebalance the load. This saves the company money, accelerates application performance, and increases application uptime. These capabilities are increasingly important as SDDCs provide the fabric to connect multi-cloud environments.
  • To improve elasticity and scalability. Hardware resources may be more accurately “right-sized” according to current workload demands. What’s more, a software-defined approach provides for more consistent and optimized application performance as the environment is scaled out.
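The AIOps-style rebalancing described in the objectives above can be reduced to a simple idea: score each system on health signals and steer workloads toward the best placement. The following is a hedged sketch; the weights, field names, and scoring formula are assumptions chosen for illustration, not an actual AIOps algorithm.

```python
# Hypothetical sketch of AIOps-style workload rebalancing: combine
# availability, power quality, and risk signals into one composite
# score per host, then pick the healthiest placement target.
# Weights and signal names are illustrative assumptions.


def placement_score(host: dict) -> float:
    # Higher availability and power quality raise the score;
    # higher assessed risk lowers it.
    return (0.5 * host["availability"]
            + 0.3 * host["power_quality"]
            - 0.2 * host["risk"])


def rebalance(hosts: list[dict]) -> str:
    # Return the name of the host with the best composite score,
    # i.e. the preferred target for the next workload move.
    return max(hosts, key=placement_score)["name"]


hosts = [
    {"name": "dc1-rack3", "availability": 0.99, "power_quality": 0.90, "risk": 0.4},
    {"name": "dc2-rack1", "availability": 0.97, "power_quality": 0.95, "risk": 0.1},
]
```

Here `rebalance(hosts)` prefers `dc2-rack1`: its slightly lower availability is outweighed by better power quality and much lower risk, which mirrors the multi-signal trade-off the bullet describes.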


The vision of the SDDC cannot be fulfilled by a single type of infrastructure, largely because no one type of infrastructure meets all application or all business requirements. Bare metal, virtualized and containerized infrastructures, as well as a multitude of on-premises and cloud-delivered resources, are all discrete parts of a complex infrastructure puzzle. One of the biggest values of the SDDC lies in creating a common infrastructure fabric so that data and workloads can traverse these various infrastructure types. Meanwhile, it is encouraging to see the ongoing development of more agile, flexible, and better-utilized infrastructure that can adapt more granularly and more fluidly to changing business requirements.

Even beyond the technology adaptations that must take place, arguably the most significant impact of the SDDC is that it will force IT organizations to adapt. With greater automation, the role of the IT professional can shift from day-to-day hardware management and troubleshooting, to focusing instead on overseeing service activation and delivery in the context of service level agreements. Additionally, it will contribute to a change in the total cost of ownership (TCO) equation, with an emphasis on service metering and billing over capital expenses.


Senior Analyst Krista Macomber produces analyst commentary and contributes to a range of client deliverables, including white papers, webinars and videos, for Storage Switzerland. She has a decade of experience covering all things storage, data center and cloud infrastructure, including technology and vendor portfolio developments; customer buying behavior trends; and vendor ecosystems, go-to-market positioning, and business models. Her previous experience includes leading the IT infrastructure practice of analyst firm Technology Business Research and leading market intelligence initiatives for media company TechTarget.

