Do You Still Need a Dedicated Backup Server?

For over a decade the classic data protection architecture has included a server whose sole purpose was to receive data from endpoints, either by pulling it from them or by accepting the data they pushed. That server might also perform deduplication and compression and maintain the file and media databases. All of these responsibilities made dedicating a server to the task a best practice. But is this ten-year-old way of doing things still a best practice?

Many things have changed over the last decade. Ten years ago, a data center needed a high-end system to handle all the responsibilities placed on the backup server. Compute power was so limited that applications, too, were each dedicated to a single server to make sure they got the performance they needed. Now most mid-range servers provide more than enough horsepower to drive the backup process, and there is so much compute power available for applications that, thanks to virtualization, we can stack multiple applications on each server.

A dedicated backup server came with disadvantages, too. First, the organization had to purchase a high-end server for a task that, in most environments, only ran once per day. Second, the backup server became a bottleneck. While dozens, if not hundreds, of systems could send data to it simultaneously, all of that data had to funnel into a single system. It was not unusual for the backup server to be overwhelmed from both a network and a compute perspective.

Another challenge with a dedicated backup server is scale. What does the organization do if the server runs out of network or compute resources? It has to add a bigger, more powerful server and deploy media servers, which adds further expense and complexity to an already complex architecture.

Direct Backup

It may be time for those looking to modernize their data center to consider another option: direct backup to the cloud. Direct backup means that the physical server, or even the virtual machine, sends data directly to a cloud-based backup repository. Backing up directly to the cloud eliminates the concerns over scaling compute and network, since those resources effectively scale every time a new server is added.
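As a rough illustration of what "direct" means here, the sketch below shows an agent on an application server pushing its own backup archive straight to an object-storage repository, with no backup server in the data path. It assumes an S3-compatible target and the boto3 library; the bucket name, key layout and source path are hypothetical and stand in for whatever a real product would use.

# Minimal sketch of a direct-to-cloud backup agent: each application server
# pushes its own backup archive straight to an object-storage repository,
# with no intermediate backup server in the data path.
# Assumes an S3-compatible target and the boto3 library; the bucket name,
# key layout and paths below are hypothetical.
import datetime
import socket
import tarfile
import tempfile

import boto3


def backup_directly_to_cloud(source_dir: str, bucket: str = "backup-repository") -> str:
    """Archive source_dir locally, then upload it directly to object storage."""
    hostname = socket.gethostname()
    timestamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
    key = f"{hostname}/{timestamp}.tar.gz"

    # Stage the archive in a temporary file before uploading it.
    with tempfile.NamedTemporaryFile(suffix=".tar.gz") as tmp:
        with tarfile.open(tmp.name, "w:gz") as archive:
            archive.add(source_dir)
        s3 = boto3.client("s3")
        s3.upload_file(tmp.name, bucket, key)  # data flows server -> cloud directly

    return key


if __name__ == "__main__":
    print("uploaded", backup_directly_to_cloud("/var/lib/app-data"))

Because every server runs its own copy of this logic, adding a server also adds the compute and network capacity needed to protect it, which is the scaling argument made above.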

The concern with direct backup is the potential impact on application performance, but in the modern data center, where compute resources are plentiful, running out of computing power is far less of a worry than it was in years past. Another concern is management: how are the backups of all of these individual components managed, and how is ownership of the protected data consolidated?

Addressing these issues requires a new, cloud-based software architecture that can centralize the management of thousands of endpoints and consolidate their data into a single storage repository. The cloud is an ideal place to host such an operation. Endpoints can perform their own deduplication and compression and efficiently send only the net-new data segments directly to the cloud. The cloud-hosted software essentially becomes an orchestration and management engine for the endpoints it protects. It should also provide global deduplication to control cloud storage costs.
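To make the "net-new data segments" idea concrete, here is a minimal, hypothetical sketch of client-side deduplication: the endpoint splits a file into fixed-size segments, fingerprints each with SHA-256, and compresses and sends only the segments the repository has not already seen. The chunk size, the fingerprint index and the send_segment() transport are assumptions for illustration, not any particular vendor's implementation.

# Minimal sketch of client-side deduplication, assuming fixed-size chunking
# and a SHA-256 fingerprint per segment; the index and send_segment() stand in
# for whatever global dedup index and transport a real product would use.
import hashlib
import zlib

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB segments (an arbitrary choice for this sketch)


def backup_file(path: str, known_fingerprints: set) -> list:
    """Return this file's segment fingerprints, uploading only net-new segments."""
    manifest = []
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            fingerprint = hashlib.sha256(chunk).hexdigest()
            manifest.append(fingerprint)
            if fingerprint not in known_fingerprints:
                send_segment(fingerprint, zlib.compress(chunk))  # compress, then ship net-new data
                known_fingerprints.add(fingerprint)              # the index now covers this segment
    return manifest  # the manifest lets the cloud software reassemble the file on restore


def send_segment(fingerprint: str, payload: bytes) -> None:
    # Placeholder transport: a real agent would PUT this to the cloud repository.
    print(f"uploading segment {fingerprint[:12]}... ({len(payload)} bytes compressed)")

In a global deduplication scheme the fingerprint index is shared across all endpoints, which is what keeps cloud storage costs down when many servers hold copies of the same data.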

StorageSwiss Take

In light of the amount of data being protected, the expectations of the users of that data and the compute now available to drive the process, the classic backup architecture with a dedicated backup server needs to change. Backup can no longer funnel through a single dedicated server, and backup software needs to become more distributed. One way to achieve this goal is direct backup, where the application servers send data directly to a backup device or target.

In this ChalkTalk Video, Storage Switzerland and Druva discuss legacy backup architectures and how they need to change in the cloud era.

Eight years ago, George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a heavily sought-after public speaker. With 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS and SAN, Virtualization, Cloud and Enterprise Flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.
