When looking for a VMware alternative, you need to move beyond low prices and look for a solution that minimizes VMware TCO triggers. VMware Total Cost of Ownership (TCO) triggers are challenges or needs that come up during the use of VMware and force you to do something that increases the cost of ownership. IT planners rarely factor these costs into the ROI calculation, but they should.
In a recent article, I discussed how organizations could accelerate the return on investment (ROI) when selecting a VMware alternative. A mistake many IT professionals make is assuming that the TCO is set in stone and simply mirrors the ROI.
VMware ROI vs. TCO
ROI is based on how quickly the new product can reduce costs and depends mainly on the upfront price and how many years of service remain on the current product. ROI is easiest to determine if you buy the new product when the old license expires. However, even with a year or two left on the old product's license, if the new product is 50% less expensive, you'd see a return on the purchase relatively quickly.
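The payback logic above can be sketched with a few lines of arithmetic. All figures here are illustrative assumptions, not actual product pricing:

```python
# Hypothetical ROI payback sketch; all figures are illustrative
# assumptions, not actual VMware or alternative-product pricing.

def payback_years(current_annual_cost: float, new_annual_cost: float,
                  switching_cost: float) -> float:
    """Years until cumulative savings cover the one-time switching cost."""
    annual_savings = current_annual_cost - new_annual_cost
    if annual_savings <= 0:
        raise ValueError("The new product must cost less per year to pay back")
    return switching_cost / annual_savings

# Example: the new product is 50% less expensive per year.
years = payback_years(current_annual_cost=100_000,
                      new_annual_cost=50_000,
                      switching_cost=75_000)
print(f"Payback in {years:.1f} years")  # Payback in 1.5 years
```

The point of the sketch: the larger the annual savings relative to the one-time switching cost, the less it matters how much time is left on the old license.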
TCO is based on how much it costs to maintain and upgrade the product to keep pace with the organization's demands. The more you can do with the initial investment, and the less frequently you must upgrade the product, the lower your TCO. There is also an operational aspect to calculating TCO: how many administrators the infrastructure requires and how quickly they can respond to requests for services or provision new applications.
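The components named above (upfront investment, upgrades, and ongoing operational cost) can be combined into a simple model. The figures below are hypothetical:

```python
# Simple TCO model with hypothetical figures: capital cost plus upgrades,
# plus recurring maintenance and administrator cost over the ownership period.

def total_cost_of_ownership(upfront: float, upgrade_cost: float,
                            annual_maintenance: float, admins: int,
                            admin_cost_per_year: float, years: int) -> float:
    """Sum capital, upgrade, and operational costs over `years` of ownership."""
    recurring = annual_maintenance + admins * admin_cost_per_year
    return upfront + upgrade_cost + years * recurring

# A platform that needs fewer upgrades and fewer administrators has a
# lower TCO, even at the same upfront price.
cost = total_cost_of_ownership(upfront=200_000, upgrade_cost=100_000,
                               annual_maintenance=40_000, admins=3,
                               admin_cost_per_year=120_000, years=5)
print(cost)  # 2300000.0
```

Note that the recurring term dominates over a multi-year horizon, which is why TCO triggers matter more than the sticker price.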
Any VMware exit should consider both the ROI and the TCO of making the switch.
Understanding the VMware TCO Triggers
A TCO trigger is a situation that causes the organization to spend more money to solve a problem or meet a challenge that VMware can't address with the existing software licensing. The table below lists some of the triggers that force customers to spend money, thereby increasing the VMware TCO. Most VMware TCO triggers tie back to a lack of scale, something Verge CEO Yan Ness and I discuss during our on-demand webinar "How to Eliminate the Data Center Scale Problem."
| VMware TCO Trigger | Typical Response | Impact |
| --- | --- | --- |
| VM-per-server limitations | Buy new servers | Non-budgeted cost |
| Increased I/O demand | Buy another/replace SAN/NAS | Adds cost, data migration process |
| New server type required | Create new VMware cluster | Increased license cost and complexity |
| Virtualize bare metal workload | Don't | Silo of servers |
| Increased capacity demand | Buy another/replace SAN/NAS | Increases silos or clusters |
VM Per Server Triggers
The more virtual machines (VMs) you can host on a single server, the fewer servers you need to purchase, reducing one of the more expensive line items in the data center while also lowering power and cooling costs, helping the organization meet sustainability goals.
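The server-count impact of density is simple arithmetic; the fleet size and density figures below are hypothetical:

```python
import math

def servers_needed(total_vms: int, vms_per_server: int) -> int:
    """Servers required to host a fleet at a given VM density."""
    return math.ceil(total_vms / vms_per_server)

# Hypothetical fleet of 600 VMs: raising density from 30 to 40 VMs per
# server cuts the server count from 20 to 15.
print(servers_needed(600, 30))  # 20
print(servers_needed(600, 40))  # 15
```

Each server removed also removes its power, cooling, and maintenance footprint, which is why density improvements compound across the TCO.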
Today’s servers certainly have the raw processing power to support significantly more VMs per server than what is common. The primary reasons for not increasing virtual machine density to its full potential are concerns over sudden spikes in processing demand and storage I/O requirements.
Solving for VM Density and I/O Demand
The typical response to meeting a demand for VM density is not to increase VM density but to buy more of the same server type and grow the VMware cluster, keeping the VM-to-server ratio the same. Some organizations will buy more powerful servers to handle the increase in processor load and try to keep server count from escalating out of control. These new servers, however, need to go into a new cluster. The organization also may need to purchase a more robust storage device like an all-flash array if the burden from increasing VM count is causing storage I/O problems. These potential solutions significantly increase capital and operational expenses because the new server cluster or storage systems require additional management.
A better way to solve issues with VM density is with a more efficient hypervisor that dramatically lowers virtualization overhead so that more workloads can run on the existing hardware. For example, VMs on VergeOS typically perform more than 25% better than the same VM on VMware. After replacing VMware with VergeOS, customers have enough extra computing resources to delay planned server purchases for two to three years. In this case, VergeOS lowers the TCO.
VergeOS also addresses the storage challenges that a highly dense VM configuration will create. The software’s ultraconverged infrastructure (UCI) design means that customers can add a tier of SAS or NVMe flash to their nodes at a fraction of the cost of a new all-flash array without creating another storage silo.
Even if the current nodes won’t support a higher-performing technology like NVMe or don’t have space to install the additional drives, customers can implement a group of nodes specifically for high-performance I/O. VergeOS can support groups of nodes with different characteristics, including processor and storage media types.
New Server Type Triggers
In any organization, there will come a time when IT needs to implement a new type of server in its VMware environment. In some cases, if the hardware is similar enough (integrating servers with the newest Intel processors alongside older Intel processors, for example), the customer can extend the existing environment. However, if the customer wants to add AMD processors or NVIDIA GPUs to the VMware environment, they need to create a new VMware cluster, which IT must manage separately. That new cluster will need a dedicated partition from a shared SAN/NAS. If the customer is using vSAN, they must create a new vSAN instance for that cluster and pay for separate licenses.
New servers are a fact of data center life, and adding them to existing infrastructure shouldn't create such angst. With VergeOS, you can support servers with processors from different manufacturers, like AMD and Intel, and even NVIDIA GPUs. You can also support servers that are mostly storage. As discussed above, you can add nodes with NVMe flash drives or, to meet capacity demand, primarily hard-drive-based nodes for cheap-and-deep capacity.
VergeOS groups these different nodes by type into clusters. However, unlike with a VMware cluster, the resources within VergeOS are universally available to all VMs, and storage features like global inline deduplication work across the clusters. If the customer chooses, the resources within a cluster can be isolated to a particular group of virtual machines via VergeOS's Virtual Data Center (VDC) technology. VDCs enable IT to align resources by workload, line of business, or individual customer.
More Than ROI
While VergeOS provides an excellent and rapid ROI, its ability to also lower the cost of ownership is critical to lowering long-term data center costs. It provides:
- Greater VM density using existing hardware
- Storage I/O and capacity to meet demand at a fraction of the cost of a new SAN/NAS
- Support for a variety of server types, for maximum workload flexibility and longevity
- Low virtualization overhead to consolidate bare metal workloads
To learn more, schedule a 20-minute technical whiteboard session with me, and we can get into the details of how VergeOS works.