Containerized Data Protection Versus Hyperconverged Data Protection

As applications’ data protection requirements become more varied, many enterprises are considering hyperconverged data protection solutions so they can “just drop in” a node to meet expansion needs. While on the surface this sounds like an agile and simple approach, it consumes valuable data center budget and floorspace, does not deliver optimal resource utilization, and does not provide isolation between workloads, which is sometimes a requirement. A scalable, container-driven approach to data protection may be a more effective answer.

Overview of Modern Data Protection Requirements

Data protection sprawl has become a real problem for many enterprises. All applications require some form of data protection and data retention, but each application has unique requirements. As a result, business units within the enterprise often procure their own data protection services and solutions to ensure that their specific application’s requirements are met. The problem compounds as new business units are spun up, as the enterprise acquires other businesses, and as business requirements inevitably change. At the outset, simply adding new appliances seems easy, because it avoids the need to change existing implementations. Very quickly, however, this results in redundant or underutilized IT resources, a significant concern given that data center floorspace comes at a high premium today. A splintered data protection environment is also challenging to manage, and it becomes even more complex at scale. The addition of cloud data protection services to the enterprise further exacerbates the problem.

What is Hyperconverged Data Protection?

A slew of data protection offerings based on hyperconverged infrastructure (HCI) have hit the market. HCI virtualizes compute and storage, enabling these resources to be deployed on industry-standard servers and centrally managed through a single management platform. HCI initially got its start with production workloads such as virtual desktop infrastructure (VDI), largely on the back of its ability to simplify the deployment and management of infrastructure resources. There is also a perception that these platforms are easy to scale, because nodes are pre-configured and can be easily deployed into an existing cluster. Additionally, an enterprise may gradually migrate portions of its environment to HCI.

These characteristics are garnering attention for HCI in the context of data protection, because enterprises are tired of the hassle and expense of planning for and managing a data protection infrastructure that, at the end of the day, does not directly support revenue-generating business activities. One key problem with a hyperconverged-led approach to data protection at enterprise scale, however, is that it does not eliminate silos of infrastructure. Multiple disparate systems still need to be purchased and managed. Additionally, when adding a node, IT still needs to procure rack space and then deploy and configure the system; this takes time and leaves room for human error. On top of this hassle, infrastructure resources cannot be shared between the HCI clusters that typically exist within departments or locations, so infrastructure resources (hardware and software alike) typically remain underutilized.

What is Containerized Data Protection?

An alternative that enterprises should consider is containerized data protection, which can meet application- or department-specific data protection needs while consolidating everything into a smaller data center footprint that is more fully utilized and easier to manage. Containers fully abstract the application from infrastructure resources, and they contain all of the components, including files and dependencies, required to run the application. This approach differs from virtualization, which pools and abstracts infrastructure (hardware) resources that are then presented to the application. Virtualization can improve hardware utilization compared to a bare metal implementation because it can run multiple workloads on the same system, but it still carries high infrastructure overhead and is less flexible than containers. Each VM runs and is tied to its own operating system (OS), whereas multiple containers share access to the host OS in addition to physical resources like CPUs and storage capacity. As a result, containers can be spun up and down more quickly, they are highly portable, and they enable physical resources to be assigned in a more granular, application-specific manner. Multiple containerized applications can coexist on the same infrastructure with a high degree of hardware utilization.
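The overhead difference between one-OS-per-VM and containers sharing a host OS can be illustrated with a toy calculation. The memory figures below are illustrative assumptions, not measurements of any particular hypervisor or container runtime:

```python
# Toy comparison of memory overhead: one guest OS per VM versus
# containers sharing a single host OS. All figures are assumptions.

GUEST_OS_MEM_GB = 2.0   # assumed footprint of each VM's guest OS
HOST_OS_MEM_GB = 2.0    # assumed footprint of the single shared host OS
APP_MEM_GB = 1.5        # assumed working set of one data protection service


def vm_memory(services: int) -> float:
    """Each service runs in its own VM, each carrying a full guest OS."""
    return services * (GUEST_OS_MEM_GB + APP_MEM_GB)


def container_memory(services: int) -> float:
    """All services run as containers sharing one host OS."""
    return HOST_OS_MEM_GB + services * APP_MEM_GB
```

Under these assumptions, 16 services need 56 GB as VMs but only 26 GB as containers; the gap widens with each additional service, which is the utilization argument in miniature.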

In a containerized data protection approach, IT can create a menu of data protection services, each dedicated to a specific application, department, or office, and configure those services according to that group’s specific requirements. Because they run in containers, these services can be consolidated onto the same infrastructure. All of this translates into streamlined management and better utilization of hardware resources. At the same time, the end user experience does not change; departments and offices can still select and implement a specific suite of data protection services.
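The “menu” idea can be sketched as a small service catalog from which each department selects and receives its own configured instances. The service names, retention values, and schedule strings here are hypothetical placeholders, not features of any specific product:

```python
# Sketch of a per-department "menu" of data protection services.
# Service names and policy fields are hypothetical placeholders.
from dataclasses import dataclass


@dataclass(frozen=True)
class ServiceSpec:
    name: str            # e.g. "backup", "archive", "replication"
    retention_days: int  # how long protected copies are kept
    schedule: str        # simple cron-style schedule string


# The menu IT publishes; each entry is a pre-built containerized service.
CATALOG = {
    "backup":      ServiceSpec("backup", retention_days=30, schedule="0 1 * * *"),
    "archive":     ServiceSpec("archive", retention_days=2555, schedule="0 3 1 * *"),
    "replication": ServiceSpec("replication", retention_days=7, schedule="*/15 * * * *"),
}


def provision(department: str, selections: list[str]) -> dict[str, ServiceSpec]:
    """Return the configured services a department selected from the menu."""
    return {name: CATALOG[name] for name in selections}


# A department picks only the services it needs; others are never deployed.
finance = provision("finance", ["backup", "archive"])
```

Each department gets only what it selected, while every instance still runs as a container on shared infrastructure.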

Another component of the containerized data protection value proposition is greater agility and faster time-to-market. Data protection services can be quickly deployed or spun down as requirements change, and the time required for an initial deployment can be cut dramatically. Meanwhile, updates and upgrades can be rolled out incrementally, as they are tested and verified, and when each department or office is ready.
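The incremental roll-out described above can be sketched as upgrading only the departments that have opted in, while everyone else stays on the current version. Department names and version strings are illustrative assumptions:

```python
# Sketch of an incremental roll-out: each department's containerized
# service is upgraded only when that department is ready.
# Names and version strings are illustrative assumptions.

deployments = {"finance": "1.0", "engineering": "1.0", "sales": "1.0"}
ready_for_upgrade = {"engineering", "sales"}  # finance has not yet verified 2.0


def roll_out(target_version: str) -> dict[str, str]:
    """Upgrade only departments that have opted in; leave the rest untouched."""
    for dept in deployments:
        if dept in ready_for_upgrade:
            deployments[dept] = target_version
    return deployments


roll_out("2.0")
```

After the roll-out, engineering and sales run 2.0 while finance remains on 1.0 until it verifies the upgrade, which is exactly the per-department pacing the containerized model allows.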

Conclusion: Comparing Hyperconverged to Containerized Data Protection

For large enterprises seeking to accommodate varied data protection requirements at scale without massively increasing their capex (infrastructure) or opex (IT staff) budgets, a containerized approach is likely to make a lot of sense. Containerized applications are fully isolated and can be dedicated to specific departments or offices. At the same time, they are agile and lightweight, positioning IT to respond to specific data protection requirements faster and with a finer degree of granularity. Backups and other services can be brought online very quickly; simply put, containers position IT to do in minutes what used to take days. Finally, but far from least importantly, they require less infrastructure to be purchased and deployed, and they enable a range of services for various departments to be consolidated on the same systems.

Sponsored by Veritas


Senior Analyst, Krista Macomber produces analyst commentary and contributes to a range of client deliverables including white papers, webinars and videos for Storage Switzerland. She has a decade of experience covering all things storage, data center and cloud infrastructure, including: technology and vendor portfolio developments; customer buying behavior trends; and vendor ecosystems, go-to-market positioning, and business models. Her previous experience includes leading the IT infrastructure practice of analyst firm Technology Business Research, and leading market intelligence initiatives for media company TechTarget.
