The Post-Virtualization Refresh: Is Hyperconvergence the Answer?

Hyperconvergence has seen significant uptake in the last two years, driven by its simplification of IT infrastructure. It reduces the need to manage discrete devices or have specialized training in component-level technology, such as storage area networks (SANs); makes operations more efficient; and lowers costs. Any organization that is in or is approaching a technology refresh in the data center should include hyperconverged infrastructure in its evaluation process.

One of the drivers behind the increased interest in hyperconverged architectures is that data centers are reinventing themselves to become more responsive to the needs of the business. While data center reinvention should be driving up revenues in the storage industry, the industry is instead in decline. For the third quarter of 2015, IDC reports that four of the five largest storage vendors showed year-over-year revenue declines: EMC -8%, Dell -2%, NetApp -13%, and IBM -32%. The decline is not limited to storage. WAN acceleration heavyweight Riverbed is also in decline and is trying to divest itself of non-essential components of its portfolio.

The Resources Impact

A significant shift in IT infrastructure occurred in the mid- to late-90s. Organizations moved away from proprietary operating systems running on proprietary hardware, in favor of operating systems like Linux and Windows that worked across various server hardware solutions. Almost unlimited IT resources powered this transition thanks to the simultaneous rise of the Internet. In this era, the data center was reinventing itself to embrace a globally connected reality.

After the Internet bubble burst, the phrase “do more with less” became the IT mantra. But now, in 2016, IT is more often dealing with a climate of “do more with nothing.” Despite the increasing importance of IT in driving revenue and providing competitive differentiation, IT budgets remain flat or see only modest increases. IT also finds itself short-staffed, the result of a combination of limited budgets and a lack of qualified personnel. The absence of these key resources means that data center reinvention has to be accomplished with fewer resources than it takes just to maintain current systems. IT planners are looking to the cloud to help them reconcile these competing objectives.

Cloud Impact

The cloud is the key for organizations looking to reinvent IT, despite the reality of constrained resources. While some organizations will move much of their operations to the cloud, most are taking cues from cloud providers and designing cloud-like architectures within their own data centers. Cloud initiatives seem to be the source of the storage industry’s decline, as well as the source of vendor confusion about how to respond. For example, HP and IBM are becoming smaller: HP split itself into two companies, and IBM divested its x86 server business to Lenovo. At the same time, other vendors like Dell and Oracle seek to offer the entire solution stack in an effort to simplify and accelerate data center reinvention.

Most organizations are looking for data centers that are in between these two extremes. They want the speed and simplicity of a turnkey solution, but with the flexibility and cost-effectiveness of a build-it-yourself approach.

The Post-Virtualization Era Needs Post-Virtualization Infrastructure

The past decade has been a pivotal time for IT, and the year 2016 marks the beginning of the post-virtualization era. Most data centers are well over 50% virtualized and have held to a “virtualize first” philosophy for years. The problem is that virtualization is occurring on infrastructure that was designed for a pre-virtualization world. A significant technology refresh is, therefore, in order.

A post-virtualization infrastructure needs to fully take advantage of virtualization. Intel continues to make more powerful CPUs with greater and greater numbers of cores. Those cores need to be leveraged not only by the virtual machines running on them, but also by traditional legacy infrastructure, such as storage, storage networking, backup, replication, and deduplication. The post-virtualization infrastructure also needs to drive out complexity and high cost.

The Problem With This Technology Refresh Cycle

Technology refresh waves are nothing new. The move from mainframe to open systems, the move from open systems to less proprietary operating systems, and the initial wave of virtualization are examples of refresh and disruption. The post-virtualization technology refresh is unique:

First, it has to occur on a relatively flat IT budget, with limited IT personnel resources.

Second, the technology refresh needs to happen while IT is in motion. During previous technology refreshes, IT was granted an operational pause because the organization was not as reliant on IT as it is today. Now, many organizations don’t just count on IT: their very product is IT, or IT drives the production of the product. In either case, if IT stops, so does the organization.

Third, the technology refresh is occurring for more than just one aspect of the data center. Storage, servers, backup, WAN acceleration, and cloud connectivity all need refreshing at the same time—again, while IT is in motion.

In truth, this technology refresh is more of a technology overhaul. In the past, refreshing each of these technologies was siloed, which, while more orderly, left IT in a perpetual state of updating. It also led IT to overbuy each of these technologies to ensure that the purchase would satisfy requirements until the next refresh.

The data center has a problem. IT has to perform another significant technology refresh to meet higher-than-ever expectations with a limited budget. The data center also needs to prepare for a future where the organization is so dependent on IT that it won’t tolerate another disruptive refresh. This technology refresh needs to eliminate refreshes and, instead, allow for gradual but continuous upgrades to technology.

Enter Hyperconvergence

An ideal candidate for this new infrastructure is hyperconvergence. A hyperconverged architecture integrates storage and storage networking into the hypervisor, leveraging available compute to power everything. The result is a much flatter design that is less siloed and easier to manage. A hyperconverged solution clusters a group of physical servers and aggregates compute and storage across them, making it an excellent fit for the on-demand reality of the modern data center.

Hyperconverged architectures are scale-out by their nature, allowing an IT planner to purchase capacity and performance to meet today’s needs, and then seamlessly add nodes as the business dictates. Each new node represents the latest technology innovations in CPU, storage networking, and storage, gradually modernizing the architecture. As nodes age, they are retired and removed from the cluster.
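To make the pooling and scale-out model concrete, the following is a minimal, purely illustrative Python sketch of a cluster that aggregates compute and storage across nodes, adds nodes as demand grows, and retires aging ones. The class and attribute names are hypothetical and do not represent SimpliVity’s or any other vendor’s actual implementation.

import uuid
from dataclasses import dataclass, field

@dataclass
class Node:
    """One physical server in the cluster (illustrative capacities only)."""
    cpu_cores: int
    storage_tb: float
    generation: int                      # hardware generation, used to decide when to retire
    node_id: str = field(default_factory=lambda: uuid.uuid4().hex[:8])

class HyperconvergedCluster:
    """Toy model of a scale-out cluster that pools compute and storage."""
    def __init__(self):
        self.nodes: list[Node] = []

    def add_node(self, node: Node) -> None:
        self.nodes.append(node)          # capacity and performance grow one node at a time

    def retire_node(self, node_id: str) -> None:
        # aging hardware is simply removed; the remaining nodes keep serving the pool
        self.nodes = [n for n in self.nodes if n.node_id != node_id]

    @property
    def total_cores(self) -> int:
        return sum(n.cpu_cores for n in self.nodes)

    @property
    def total_storage_tb(self) -> float:
        return sum(n.storage_tb for n in self.nodes)

# Start small, then scale out as the business dictates.
cluster = HyperconvergedCluster()
cluster.add_node(Node(cpu_cores=32, storage_tb=20, generation=1))
cluster.add_node(Node(cpu_cores=48, storage_tb=40, generation=2))
print(cluster.total_cores, cluster.total_storage_tb)   # 80 cores, 60.0 TB pooled

The point of the sketch is simply that capacity is an aggregate of whatever nodes are currently in the cluster, so adding a newer-generation node and retiring an older one modernizes the architecture without a forklift refresh.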

Hyperconvergence also enables vendors to innovate within the architecture. For example, features like deduplication and compression can improve storage efficiency while support for memory bus flash can reduce latency. Additionally, hyperconverged infrastructure vendors can integrate capabilities that have typically been add-on purchases, like backup or replication.
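As a rough illustration of how deduplication and compression save capacity, here is a small Python sketch of a content-addressed block store. The 4 KB block size, SHA-256 fingerprinting, and zlib compression are assumptions chosen for demonstration only; they are not a description of SimpliVity’s or any vendor’s data services.

import hashlib
import zlib

class DedupStore:
    """Toy content-addressed store: identical blocks are kept once, compressed."""
    def __init__(self, block_size: int = 4096):
        self.block_size = block_size
        self.blocks: dict[str, bytes] = {}   # fingerprint -> compressed block

    def write(self, data: bytes) -> list[str]:
        """Split data into blocks and store each unique block only once."""
        refs = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            fp = hashlib.sha256(block).hexdigest()
            if fp not in self.blocks:            # only new content consumes space
                self.blocks[fp] = zlib.compress(block)
            refs.append(fp)
        return refs

    def read(self, refs: list[str]) -> bytes:
        return b"".join(zlib.decompress(self.blocks[fp]) for fp in refs)

store = DedupStore()
refs = store.write(b"A" * 8192 + b"B" * 4096)    # two identical "A" blocks, one "B" block
print(len(refs), len(store.blocks))               # 3 logical blocks, only 2 stored physically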

The Hyperconverged Challenge

The challenge with hyperconvergence is that it goes against the status quo: the three-tier architecture (server, storage, storage networking) that has been in place for over 30 years. Hyperconverged architecture represents a new frontier in data center design, and as with anything new, there is risk in making the transition. But there is also a danger of being left behind. The foundation of hyperconverged architecture has been vetted at an unprecedented level thanks to organizations like Amazon, Google, and Microsoft. An IT organization that ignores hyperconverged architectures and stays with the status quo risks being unable to meet the demands the business places on it, given the realities of a tight budget and limited personnel.

Stepping toward Hyperconvergence

The good news for organizations that are willing to go against the status quo and embrace hyperconvergence is that vendors like SimpliVity don’t require an organization to jump in with both feet and throw out its existing investment in servers, networking, and storage. Instead, IT planners can implement a hyperconverged architecture to solve a specific pain point: either a new project or an existing one in need of a server or storage refresh. The hyperconverged solution can solve that pressing problem, as well as provide storage resources to the existing infrastructure.

As confidence in the hyperconverged investment increases and additional storage or server refreshes occur, these applications can be easily migrated to the hyperconverged architecture. If the architecture needs more compute or storage capacity, another node is added to support the transition. The addition of nodes to handle the migration of new environments into the design highlights the key advantage of hyperconvergence: flexibility.

Conclusion

The data center faces the most pivotal technology refresh in its history. The current architecture was designed before the introduction of virtualization, yet virtualization has been adopted at an unprecedented pace. The post-virtualization refresh, in addition to the typical challenges of a technology refresh, also has to be performed with limited disruption, despite a shortage of staff and a flat budget. Hyperconverged architecture’s answer is to redesign the architecture around virtualization instead of trying to “shoehorn” virtualization into a legacy design.

Sponsored by SimpliVity

Eight years ago, George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a heavily sought-after public speaker. With 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, virtualization, cloud, and enterprise flash. Prior to founding Storage Switzerland, he was CTO at one of the nation’s largest storage integrators, where he was in charge of technology testing, integration, and product selection.
