Virtualization is Critical to the Always-On Data Center

The concept of an always-on application has been a reality for years thanks to clustered applications and add-on High Availability (HA) software, but meeting this expectation can be expensive and complicated. Expand the scope beyond a single application to an entire data center, and always-on seems like an impossibility. However, as we discuss in our on-demand webinar, “Making the Always-On Data Center a Reality”, server virtualization combined with the right data protection hardware and software can significantly reduce the cost and complexity of the always-on data center.

Step 1 – Virtualize Everything

The first step in establishing the always-on data center is to virtualize everything. In the past, the concern with 100% virtualization was maintaining consistent performance for mission-critical workloads. Thanks to all-flash arrays and hybrid storage systems that provide quality of service (QoS) functionality, mission-critical workloads can now be virtualized without performance consistency concerns. These advancements in primary storage open the door to 100% virtualization.

A high virtualization rate opens the door to advanced data protection applications and backup storage hardware, which can leverage the abstracted nature of virtualization to make applications more nimble. The ability to move an application from one server to another is a simple example. Virtualization has also enabled data protection applications to interface with hypervisors to take frequent, low-impact backups. The more frequently backups are taken, the easier it is to meet a strict recovery point objective (RPO).
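The relationship between backup frequency and RPO is simple to reason about: with periodic backups, the worst case is a failure just before the next backup runs, losing everything written since the last one. A minimal sketch (the function name and intervals are illustrative, not from any particular product):

```python
from datetime import timedelta

def worst_case_rpo(backup_interval: timedelta) -> timedelta:
    """With periodic backups, the worst-case recovery point equals one
    full backup interval: a failure just before the next backup loses
    everything written since the previous one."""
    return backup_interval

# Comparing a nightly job to frequent hypervisor-integrated snapshots:
print(worst_case_rpo(timedelta(hours=24)))    # nightly backup
print(worst_case_rpo(timedelta(minutes=15)))  # frequent snapshot
```

This is why frequent, low-impact hypervisor-level backups matter: shrinking the interval directly shrinks the maximum possible data loss.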

To meet stricter recovery time objectives (RTO), these backups can then be instantiated as datastores on the backup appliance, thanks to recovery-in-place technology, also a benefit of virtualization. Mission-critical applications, which have the strictest RPO/RTO expectations, can replicate this data to a secondary storage array for even more rapid recovery at full performance. Either type of protection can be replicated off-site to protect against a site disaster.
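The RTO benefit of recovery in place comes from eliminating the copy-back step. A rough back-of-the-envelope sketch (the dataset size and throughput figures are hypothetical, chosen only to illustrate the math):

```python
def restore_time_minutes(dataset_gb: float, restore_mb_per_s: float) -> float:
    """Estimate minutes needed to copy a backup back into production
    storage at a given effective restore throughput."""
    return (dataset_gb * 1024) / restore_mb_per_s / 60

# A hypothetical 2 TB datastore restored at an effective 500 MB/s:
copy_back = restore_time_minutes(2048, 500)
print(f"Traditional restore: ~{copy_back:.0f} minutes before the VM can boot")
# Recovery in place skips this copy entirely: the VM boots directly
# from the datastore instantiated on the backup appliance, so RTO is
# dominated by VM boot time rather than data transfer.
```

The trade-off, as the article notes later, is that the backup appliance must then perform well enough to act as production storage while hosting the datastore.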

To Learn More About RPO and RTO read: Backup Basics: What do SLO, RPO, RTO, VRO and GRO Mean?

Removing the Roadblocks to Always-on

There are three primary roadblocks to reaching the always-on data center. The first is an application failure or data corruption. These failures are overcome by leveraging the backup software and backup appliance to roll back to an older version of the data set. Using recovery in place, the application’s datastore is started directly on the appliance, saving the time it would take to transfer data back into production. The second is a server failure; in this case, the VM is restarted on another physical server and pointed to either the replicated storage pool or, again, the backup appliance. The third is a failure of the primary storage system; here too, the VM can be pointed to either the replicated copy of the data or the backup copy on the backup appliance.
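The three recovery paths above follow a consistent decision pattern, sketched below in illustrative pseudocode-style Python (the enum and function names are hypothetical, not from any vendor's tooling):

```python
from enum import Enum, auto

class Failure(Enum):
    APP_CORRUPTION = auto()   # application failure or corrupted data
    SERVER_FAILURE = auto()   # physical host is down
    STORAGE_FAILURE = auto()  # primary storage system is down

def recovery_action(failure: Failure, has_replica: bool) -> str:
    """Map each roadblock to the recovery path described above."""
    if failure is Failure.APP_CORRUPTION:
        # Roll back: boot the datastore from an older recovery point,
        # in place on the backup appliance, skipping the data copy.
        return "recover in place from backup appliance"
    if failure is Failure.SERVER_FAILURE:
        # Restart the VM on another host, pointed at surviving storage.
        target = "replicated storage pool" if has_replica else "backup appliance"
        return f"restart VM on another host against {target}"
    # Primary storage failure: repoint the VM at a surviving data copy.
    target = "replicated copy" if has_replica else "backup copy on the appliance"
    return f"repoint VM at {target}"
```

In every case the recovery target is a surviving copy of the data, with the replicated copy preferred for full performance and the backup appliance as the fallback.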


The new capabilities of virtualized servers create a new set of expectations for data protection software and hardware. The software needs to fully exploit what virtualization provides, and the hardware needs to be prepared to act as production storage when it is hosting a VM’s datastore. In our webinar, we are joined by experts from Veeam and ExaGrid to discuss how to design an always-on data center and how to make sure the software and hardware selected can meet those demands, all while staying under budget.


Watch On Demand

George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.

