Rack Density – The Key to Maximum Virtualization ROI

Data center floor space is an increasing concern for IT managers. Once the walls of the data center have been reached, the only options are either to decrease the number of devices in the current data center or to build a new one. Many IT managers are counting on virtualization efforts to maximize data center floor space, but if maximum rack density cannot be achieved, the virtual infrastructure can only provide temporary relief from data center overcrowding.

Server virtualization projects have generated a tremendous return on investment (ROI), and now desktop virtualization is offering similar promises. As these projects begin to scale, however, much of the original ROI may be lost as their supporting infrastructures begin to consume data center floor space. Maximizing rack density should become a key design criterion so that the initial ROI gains of virtualization projects can not only be protected but also extended.

Virtual Infrastructures Take Space

Data center floor space savings are often cited as an area of gain in a virtualization project. This is especially true in the initial implementation, when older physical servers are decommissioned as they are virtualized. Once the initial virtualization effort is complete and those older physical systems are virtualized, the next wave of virtual machines comes from satisfying the demand for net new server instances, which can now be delivered faster than ever. As a result, the virtual infrastructure begins to consume data center floor space quickly.

This data center “land grab” satisfies an ever-growing population of virtual servers while making sure those virtual servers meet performance requirements. Trying to address these needs with traditional data center technology requires space: larger physical hosts, more network paths, and storage systems with more drives and controllers.

The rack space required by the physical host is not driven by the processor; the physical size of the processor is largely unchanged, and Intel has perfected the process of squeezing more cores into the same processing space. As designs begin to focus on maximizing the number of virtual machines (VMs) per rack and maximizing rack density, the processing capability of the rack is rarely an issue.

The first challenge when implementing virtualization on a legacy supporting infrastructure is the size increase of the physical host. The physical host needs to support more network and storage interface cards, which requires more PCIe slots, which in turn requires larger power supplies and fans.

As mentioned above, traditional physical hosts require a large number of network and storage connections to deliver the I/O bandwidth needed to provide acceptable levels of performance to the VMs they support. A common configuration is two to three quad-port 1GbE interface cards, although a slow move to dual-port 10GbE cards is occurring. Additionally, most hosts will require redundant Fibre Channel connections to storage.
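As a back-of-the-envelope illustration, the short Python sketch below compares the raw bandwidth and cable count of those two configurations. The card and port counts are the ones cited above; the comparison ignores protocol overhead and the separate Fibre Channel links.

```python
# Back-of-the-envelope comparison of per-host network bandwidth for the
# two NIC configurations described above. Card and port counts are the
# illustrative figures from the text, not a vendor specification.

def aggregate_gbps(cards: int, ports_per_card: int, gbps_per_port: int) -> int:
    """Total raw Ethernet bandwidth a host's NICs can deliver."""
    return cards * ports_per_card * gbps_per_port

legacy = aggregate_gbps(cards=3, ports_per_card=4, gbps_per_port=1)   # 3x quad-port 1GbE
modern = aggregate_gbps(cards=2, ports_per_card=2, gbps_per_port=10)  # 2x dual-port 10GbE

print(f"Quad-port 1GbE config: {legacy} Gbps across 12 ports/cables")   # 12 Gbps
print(f"Dual-port 10GbE config: {modern} Gbps across 4 ports/cables")   # 40 Gbps
```

The dual-port 10GbE configuration more than triples the raw bandwidth while cutting the Ethernet cable count from twelve to four, which also eases the cabling and airflow issues described next.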

Again, this means the host has to be large enough to support four to six PCIe cards. It also means that the cabling complex coming out of a cluster of four or five servers can consume space and create heat issues in the rack; free space may have to be allocated in the rack to allow for proper airflow. The cabling complex will also require top-of-rack switches, which again consume rack space.

In order to achieve maximum VM density, these physical hosts also need to support a large amount of physical RAM. The maximum available today is typically about 256GB. Again, RAM DIMM slots require space, and in most cases 256GB of DRAM will not deliver VM densities that take full advantage of today’s and future processors. Additionally, the expense of 256GB or more of DRAM would threaten the TCO of the infrastructure.
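A rough calculation shows why the 256GB ceiling, not the processor, becomes the limit on VM density. The per-VM memory sizes and hypervisor overhead below are illustrative assumptions, not measurements:

```python
# Rough estimate of how physical RAM caps VM density on a single host.
# The 256GB ceiling comes from the article; per-VM memory sizes and the
# hypervisor overhead are illustrative assumptions.

def max_vms(host_ram_gb: int, hypervisor_overhead_gb: int, ram_per_vm_gb: int) -> int:
    """How many VMs fit in RAM before CPU cores become the limit."""
    return (host_ram_gb - hypervisor_overhead_gb) // ram_per_vm_gb

for vm_ram in (4, 8, 16):
    print(f"{vm_ram}GB VMs: {max_vms(256, 8, vm_ram)} per 256GB host")
# 4GB VMs: 62, 8GB VMs: 31, 16GB VMs: 15 -- with larger VMs the host
# runs out of memory well before a modern multi-core processor is busy.
```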

Finally, there is the associated storage that supports the virtual servers on the physical hosts. Traditionally, the only way to take full advantage of advanced hypervisor features like VMware’s vMotion and Storage vMotion, Distributed Resource Scheduler (DRS) and Site Recovery Manager (SRM) is with a storage area network (SAN). This means that the internal storage slots in the physical hosts go largely unused. It also means the complexity and rack space consumption of a shared network storage device is required.

From a rack space perspective, the shared storage system typically has two components. The first is the storage controller, the engine that moves I/O between the virtual machines and the storage media. The second is the space required to house the physical media, often 3U (three rack unit) shelves of disk drives, with enough capacity and performance to support the demands of those VMs.

A popular alternative to the storage controller/media problem is scale-out storage, which is assembled from smaller servers called nodes. Each node has storage controller processing power and storage capacity, and the nodes are clustered together to provide a single storage pool. The challenge with these alternatives is that they still, of course, consume rack space. They are not very dense in their own right, often consuming an entire rack unit to add just four drives to the storage system. Also, having processing power in each node may be more than storage control alone requires, so the scale-out storage system’s processing power often goes wasted.
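Some quick arithmetic shows how fast that density penalty compounds. The four-drives-per-rack-unit figure comes from the discussion above; the 42U rack and the space reserved for switches and airflow are assumptions:

```python
# How quickly "one rack unit per four drives" eats a rack. The
# four-drives-per-U figure comes from the article; the 42U rack and the
# units reserved for switches and airflow are assumptions.

RACK_UNITS = 42
DRIVES_PER_NODE_U = 4   # one 1U scale-out node adds four drives
RESERVED_U = 4          # assumed: top-of-rack switches, airflow gaps

usable_u = RACK_UNITS - RESERVED_U
drives = usable_u * DRIVES_PER_NODE_U
print(f"A full rack of 1U scale-out nodes holds only {drives} drives")  # 152
```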

In both cases, the goal of a densely packed virtual server rack often becomes unachievable, especially when the space required for the storage system is factored in. In most cases two rack columns are required: one for storage and one for processing.

A New Paradigm for Rack Density

Achieving maximum rack density is going to require a new hardware paradigm, one optimized for the virtual server and desktop infrastructure. This new paradigm needs to converge the CPU, network and storage resources as much as possible in order to optimize rack space. That requires more than the converged architectures offered by some of the larger storage and server hardware vendors; these are really a prepackaging of legacy designs rather than new designs altogether. As a result, they suffer the same space-wasting issues described above.

Companies like Nutanix promise to maximize rack density through new hardware designs and the elimination of a separate storage cluster. These truly converged architectures use custom-designed nodes that have all the required compute, memory, networking and storage resources built into a single, small, specialized node that can then be clustered with other nodes to deliver a highly optimized, rack-efficient virtual infrastructure.

The implementation of storage in the node is critical to successfully designing an infrastructure that can not only be rack dense but also virtual machine dense. The storage that each VM needs is local to the physical node on which it runs. Each node’s storage configuration includes a PCIe solid state device (SSD) and conventional hard disk capacity. The active components of the VM are moved to the SSD, which can also be used to augment RAM, overcoming the RAM capacity issue mentioned above. The inactive components of VM data are stored only on disk for maximum value.
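The sketch below illustrates that hot/cold placement concept in miniature: recently accessed blocks are promoted to the node-local SSD, and the least recently used blocks age down to disk. It is a conceptual illustration only, not Nutanix’s actual tiering algorithm, and the class name and capacity figures are invented for the example.

```python
# Minimal sketch of the hot/cold placement idea described above: active
# VM data lives on the node-local PCIe SSD, inactive data ages down to
# hard disk. Illustrative only; not Nutanix's actual implementation.

import time

class TwoTierStore:
    def __init__(self, ssd_capacity_blocks: int):
        self.ssd = {}        # block_id -> last access time (hot tier)
        self.hdd = set()     # block_ids resident on disk (cold tier)
        self.capacity = ssd_capacity_blocks

    def access(self, block_id: str) -> None:
        """Touch a block, promoting it to SSD and demoting the coldest."""
        self.hdd.discard(block_id)
        self.ssd[block_id] = time.monotonic()
        if len(self.ssd) > self.capacity:
            coldest = min(self.ssd, key=self.ssd.get)
            del self.ssd[coldest]
            self.hdd.add(coldest)   # least-recently-used block goes to disk

store = TwoTierStore(ssd_capacity_blocks=2)
for blk in ("vm1-boot", "vm1-swap", "vm2-boot", "vm1-boot"):
    store.access(blk)
print(sorted(store.ssd), sorted(store.hdd))  # two hottest blocks on SSD
```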

Finally, the PCIe SSD is another reason the form factor is so dense. Nutanix, for example, delivers 25K IOPS in a single 2U appliance, performance that would otherwise take two racks’ worth of mechanical hard disk drives.
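A rough sanity check, assuming typical per-drive performance (roughly 175 IOPS for a 15K RPM drive and 15 drives per 3U shelf, both assumptions rather than measurements), shows the scale of mechanical disk needed to match that figure:

```python
# Rough check on the 25K IOPS figure: how much mechanical disk would be
# needed to match it. Per-drive IOPS and shelf density are assumed
# typical values, not measurements.

TARGET_IOPS = 25_000
IOPS_PER_HDD = 175          # assumed: one 15K RPM drive
DRIVES_PER_3U_SHELF = 15    # assumed: common 3U shelf density

drives = -(-TARGET_IOPS // IOPS_PER_HDD)       # ceiling division -> 143 drives
shelves = -(-drives // DRIVES_PER_3U_SHELF)    # -> 10 shelves
print(f"{drives} drives in {shelves} shelves = {shelves * 3}U of disk alone")
```

With slower 7.2K RPM drives the drive count roughly doubles, and once controllers, hosts and switches are added, the footprint approaches the multi-rack comparison above.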

Each VM’s data is then dispersed logically across the other nodes in the cluster. This enables both VM high availability and VM migration, while eliminating the need for a separate shared storage system.
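As a conceptual illustration of that dispersal, the sketch below places two copies of each block on nodes other than its home node. Round-robin placement and a replica count of two are assumptions made for the example; the article does not describe Nutanix’s actual placement algorithm.

```python
# Illustration of dispersing a VM's data across other nodes so that a
# node failure does not lose data. Round-robin placement and two copies
# are assumptions for the sketch, not Nutanix's actual algorithm.

from itertools import islice

def place_replicas(block_id: int, home_node: int,
                   nodes: list[str], copies: int = 2) -> list[str]:
    """Pick `copies` nodes other than the block's home node."""
    others = [n for i, n in enumerate(nodes) if i != home_node]
    start = block_id % len(others)          # rotate start point per block
    ring = others[start:] + others[:start]
    return list(islice(ring, copies))

nodes = ["node-a", "node-b", "node-c", "node-d"]
print(place_replicas(block_id=7, home_node=0, nodes=nodes))  # ['node-c', 'node-d']
```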

The combination of the above attributes allows a physically smaller chassis to provide all the resources that a VM needs while offering better performance. Not only is rack density significantly increased, so is virtual machine density. A virtual infrastructure built from traditional hardware would require two rack columns to match what Nutanix can accomplish in one-half of one rack.

Nutanix is a client of Storage Switzerland


Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland, he was CTO at one of the nation’s largest storage integrators, where he was in charge of technology testing, integration and product selection.
