The “software-defined data center” (SDDC) is hailed by many as the data center architecture of the future – promising to bring new levels of hardware utilization and a simplified, public cloud-like user experience on-premises. Previously, Storage Switzerland detailed the key potential merits of SDDC architecture, chief among them establishing a common fabric for data and workload mobility across heterogeneous infrastructures. In this installment, we will explore in more detail the core precursor to the SDDC: server virtualization.
What is Server Virtualization?
Virtualization is the act of decoupling physical hardware resources, such as processing power and memory, from a single operating system and then presenting those resources as a shared pool that can be used by multiple virtual operating system instances. The software that performs this abstraction is typically called a hypervisor, and it lies at the heart of most SDDC architectures today. The concept of virtualization got its start in the early data center with the advent of the mainframe, and then became mainstream in the early-to-mid 2000s. With virtualization, each application no longer required its own server to run. Applications could now run in their own virtual machine (VM), and multiple VMs could coexist on, and share the resources of, the same physical server hardware.
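To make this concrete, the short sketch below uses the open source libvirt Python bindings to enumerate the VMs sharing a single KVM/QEMU host, along with the virtual CPUs and memory each has been allotted. This is an illustration only; the connection URI and environment are assumptions, not a prescription for any particular hypervisor.

```python
# Illustrative sketch, assuming a KVM/QEMU host with the libvirt
# Python bindings installed (pip install libvirt-python).
import libvirt

# Connect to the local hypervisor; the URI is environment-specific.
conn = libvirt.open("qemu:///system")

# One physical server, many VMs drawing on its shared pool of
# CPU and memory resources.
for dom in conn.listAllDomains():
    _state, max_mem_kib, _, vcpus, _ = dom.info()
    print(f"{dom.name()}: {vcpus} vCPUs, {max_mem_kib // 1024} MiB RAM")

conn.close()
```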
What are the Benefits and Drawbacks of Server Virtualization?
Virtualization fundamentally changed the economics of the server stack for a number of reasons. First, it enabled server hardware resource sharing. It is far more cost effective and space efficient to have a single server driving a dozen VMs than to have a dozen bare metal servers dedicated to twelve different applications. It also enabled VM portability; virtual machines can easily relocate to another physical host, in most cases transparently. This mobility makes commodity servers more viable: if one fails, its VMs can easily be restarted on another physical server. In addition to driving down server capital expenditure (CapEx), fewer physical servers also reduced power and cooling costs. Arguably even more significant, server virtualization also reduced IT management overhead. Virtualization reduces the number of systems that need to be managed, makes provisioning a far less manual process, and oftentimes virtualization software providers roll out updates automatically.
Another benefit of virtualization is that it accelerates resource provisioning and facilitates global resource sharing and file access. These capabilities benefit production environments, and they can also improve disaster recovery and business continuity. If a hardware system goes down, VMs can be quickly migrated to a different system; failover can occur faster and with fewer IT resources. The consolidation of potentially millions of server files into a single VM “package” also makes data protection easier and faster.
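As a rough illustration of this mobility, the sketch below live-migrates a running VM between two physical hosts using the same libvirt Python bindings. The hostnames and VM name are hypothetical placeholders, and the example assumes KVM/QEMU hosts that can reach each other over SSH.

```python
# Illustrative sketch: relocate a running VM to another physical host.
# Assumes two KVM/QEMU hosts reachable over SSH; the hostnames and
# VM name are hypothetical placeholders.
import libvirt

src = libvirt.open("qemu+ssh://host-a.example.com/system")
dst = libvirt.open("qemu+ssh://host-b.example.com/system")

dom = src.lookupByName("app-vm01")

# VIR_MIGRATE_LIVE keeps the guest running during the move, so the
# relocation is transparent to the application.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src.close()
dst.close()
```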
For all its benefits, server virtualization does come with some potential drawbacks to consider. The shift from a bare metal to a virtualized implementation may require additional or different hardware across the stack, such as storage and networking. It may also require a learning curve on the part of IT (although these are lesser considerations today, as virtualization is now mainstream). Upfront implementation may also take additional time, because the environment relies on additional networking to perform tasks that were handled locally on a bare metal system.
A potentially bigger challenge is guaranteeing consistent performance. Since most resources are shared, IT needs to take steps to ensure that mission- or business-critical applications get the performance and capacity they need. IT planners should look for Quality of Service (QoS) capabilities throughout the stack, as well as ways to simplify the administration of QoS across the stack.
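As one hedged example of what QoS enforcement can look like at the hypervisor layer, the sketch below uses libvirt's block I/O tuning interface to cap the disk throughput of a noisy VM so it cannot starve a critical neighbor. The VM name, device and limits are hypothetical.

```python
# Illustrative sketch: cap a VM's disk I/O so a noisy neighbor cannot
# starve critical workloads. VM name, device and limits are hypothetical.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("batch-vm02")

# Throttle the VM's virtual disk to at most 5,000 IOPS and
# 100 MiB/s of combined read/write throughput, applied live.
dom.setBlockIoTune(
    "vda",
    {
        "total_iops_sec": 5000,
        "total_bytes_sec": 100 * 1024 * 1024,
    },
    libvirt.VIR_DOMAIN_AFFECT_LIVE,
)

conn.close()
```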
The key takeaway is that virtualization is not going to be a fit for every use case. For example, bare metal systems still offer faster performance because they do not carry the CPU overhead of the hypervisor. It is important to understand how your application will perform, and the total cost of ownership of shifting to a virtualized environment.
How Has the Server Virtualization Market Developed?
Over the past 15 years, server virtualization has developed into a mature and proven technology, and the large majority of servers currently in production are virtualized. Enterprises have been keen to extend virtualization across their data centers, bringing benefits such as greater agility, better resource utilization and simplified management to their storage and networking implementations. As a result, we are seeing a shift from point server virtualization implementations to those integrated with virtualized storage and networking capabilities – largely accelerated by the advent of hyperconverged infrastructure (HCI).
HCI architectures take a virtualized server node and virtualize the direct-attached storage that is associated with that node. Many HCI systems today also integrate support for software-defined networking. HCI has been notable in simplifying and accelerating the path to storage virtualization and, more recently, network virtualization. Storage Switzerland will cover storage and network virtualization in more detail in forthcoming blogs.
In addition to the shift to HCI, we are also seeing increasing adoption of open source, Linux-based hypervisors as some enterprises look to cut costs and achieve additional flexibility. Software licensing costs are arguably the biggest pain point associated with server virtualization, and flexibility is required in today’s age of highly heterogeneous infrastructures. When it comes to performance and enterprise-grade capabilities, such as live VM backup and migration, many of these Linux “alternatives” have largely caught up to the “incumbents.” The tradeoff is that getting started with open source, Linux-based solutions is more complex.
Where do Containers, Serverless Computing and Modern Applications Fit In?
Server virtualization is well-entrenched across practically every enterprise IT environment today, typically serving a wide range – and the majority – of applications. One of the emerging problems, however, is that modern applications are very different from the legacy application set. Modern applications do not carry the predictable resource needs that define legacy applications, and they require far greater portability and faster iteration. Containers have emerged, often as environments independent of VMs, to address these requirements and to increase efficiency.
Whereas a virtual machine virtualizes and emulates the components of a hardware system, containers virtualize the operating system (OS). While each VM requires its own OS instance, multiple containers may run on the same OS instance. VMs take longer to start because each VM boots its own operating system; comparatively, thousands of containers can start in seconds. This approach also enables increased hardware resource utilization. Additionally, an application may be deployed across multiple containers, which makes large applications easier to upgrade and provides isolation, so that a code change may impact only part of an application.
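To make the startup contrast concrete, the sketch below uses the Docker SDK for Python to launch several containers that all share the host’s OS kernel. The image and command are illustrative, and the example assumes a host with Docker installed.

```python
# Illustrative sketch: start several containers that share the host
# OS kernel, in contrast to VMs that each boot their own OS.
# Assumes Docker is installed (pip install docker for the SDK).
import docker

client = docker.from_env()

# Each container shares the host kernel, so startup is near-instant
# compared with booting a full VM.
containers = [
    client.containers.run("alpine", "sleep 300", detach=True)
    for _ in range(5)
]

for c in containers:
    print(c.short_id, c.status)

# Clean up the demo containers.
for c in containers:
    c.remove(force=True)
```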
In addition to the potential benefits previously listed, containers may address some of the challenges associated with virtualization that are arising in the modern, software-defined data center. These include the failure to fully address the pain points stemming from the differences between development and production environments, as well as the management of both legacy and modern applications. Containers can facilitate a consistent environment across development, testing and production. Additionally, containerizing a legacy application can make it more compatible with the public cloud. That being acknowledged, IT professionals should be aware of containers’ implications for security, performance and compatibility, whether on bare metal or virtualized infrastructure. For example, if not implemented correctly, one container may hog resources or spread malicious code to other containers. Additionally, a container might need to run on top of a VM to be compatible with a public cloud service.
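On the resource-hogging point specifically, most container runtimes let IT impose per-container limits. The sketch below, again using the Docker SDK for Python with a hypothetical image and limits, pins a container to half a CPU and 256 MiB of memory so it cannot crowd out its neighbors.

```python
# Illustrative sketch: bound a container's CPU and memory so a
# misbehaving workload cannot hog shared host resources.
# Image name and limits are hypothetical.
import docker

client = docker.from_env()

container = client.containers.run(
    "alpine",
    "sleep 300",
    detach=True,
    mem_limit="256m",       # hard memory cap
    nano_cpus=500_000_000,  # 0.5 CPU, in units of 1e-9 CPUs
)

print(container.short_id, "running with CPU and memory caps")
container.remove(force=True)
```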
Conclusion
Server virtualization is well-entrenched as a core technology underpinning both SDDC and private and hybrid cloud environments. Over the past 15 years, the greater agility, cost savings and flexibility enabled by server virtualization, compared to full bare metal environments, have fundamentally changed the data center market and are inspiring the shift to virtualization across the data center stack (a la HCI).
That being acknowledged, we are in the midst of another market evolution, driven by greater application volatility, agile development, continuous delivery, and IT infrastructure heterogeneity. Against this backdrop, server virtualization may no longer be the default best fit for the vast majority of workloads. New technologies are emerging, including containers and, looking further ahead, serverless computing. For most data centers, it is likely that these technologies will need to coexist alongside server virtualization for the foreseeable future in order to most effectively support the changing workload set.