Briefing Note: The Cloud is Now as Fast as the SMB Needs

Cloud-based services, like backup and disaster recovery, offer attractive alternatives for many SMBs. They're simple to set up and operate, scale easily and can be paid for on a monthly basis. But companies typically have to give something up, usually speed, when these data sets are sent to the cloud. Now Zetta is showing that direct-to-cloud backups for SMBs are as fast as they need to be, which for most companies means as fast as, or faster than, the typical cloud backup appliance.

For many backup solutions, the cloud was an added feature, one that tacked WAN transfer times onto the existing backup window. This created the need for an on-site appliance to store backups locally and manage the backend connectivity with the cloud. Zetta's approach was to optimize the entire direct-to-cloud backup process, from the first comparison steps on the source server to the final data writes at the cloud destination.

Cloud backup at disk drive speeds

Zetta DataProtect version 4.7 was third-party tested at 975 Mb/s, which is comparable to hard disk drive throughput (in between a 5400 RPM SATA drive running at 800 Mb/s and a 7200 RPM drive at 1075 Mb/s). This means that, assuming a 1 Gb/s connection to the internet, backups can be sent to the cloud as fast as the data can be pulled off the server hard drive. In these tests, roughly 500 GB data sets were backed up to the cloud in less time than it took an SMB backup appliance to complete the same job over the LAN.
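A quick back-of-the-envelope check of those figures (my arithmetic, not part of the test report): at the measured 975 Mb/s, a 500 GB data set should take roughly an hour to move.

```python
# Illustrative arithmetic only: time to move the quoted 500 GB data set
# at the quoted 975 Mb/s transfer rate.
dataset_gb = 500     # approximate test data set size, in gigabytes
link_mbps = 975      # measured transfer rate, in megabits per second

dataset_megabits = dataset_gb * 8 * 1000   # GB -> gigabits -> megabits
seconds = dataset_megabits / link_mbps
print(f"~{seconds / 60:.0f} minutes")      # roughly 68 minutes
```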

How they do it

Zetta has been in business for a number of years as a direct-to-cloud solution designed for server backup. The company has honed its technology to maximize not just data transfer but the entire backup process which, for most backup solutions, includes four steps.

First, the source server has to check the data set being backed up for changes. This is the determination of which data objects need to be sent to the cloud, a process that for some software applications can take a significant amount of time. Zetta actually makes this process simpler and much faster for the source server by comparing each new data object with a local cache of digital signatures that represent each file and sub-file component that's already been backed up.
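The general idea of a local signature cache can be sketched as follows. This is a hypothetical illustration, not Zetta's actual code: each data chunk is hashed, and only chunks whose signature is missing from the local cache need to be sent.

```python
import hashlib

def changed_chunks(chunks, signature_cache):
    """Yield only the chunks whose digest is not already in the local cache.

    Hypothetical sketch of signature-based change detection: the cache holds
    hashes of everything already backed up, so unchanged chunks are skipped
    without any round trip to the cloud.
    """
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in signature_cache:
            signature_cache.add(digest)
            yield digest, chunk

# The cache already knows about one block from a previous backup.
cache = {hashlib.sha256(b"unchanged block").hexdigest()}
new = list(changed_chunks([b"unchanged block", b"edited block"], cache))
# Only the edited block needs to be transmitted.
```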

Backup process optimized for the cloud

Next, the server has to run calculations on the local CPU to conduct deduplication and compression and package those net-new data objects for transmission to the cloud. Then, it has to physically move that changed data over the internet. Zetta's dedupe and compression are optimized for speed, and the company has developed a process to map these changed sub-file components to the complete file or data set in the cloud. This allows the system to recognize the right file blocks, even those that are sent out of order, making the transmission process more efficient and more robust, and eliminating retransmissions.
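The packaging-and-mapping idea can be illustrated with a toy example (my sketch under assumed fixed-size blocks, not Zetta's implementation): each changed block is compressed and tagged with its position and a digest, so the destination can place blocks correctly even if they arrive out of order.

```python
import hashlib
import zlib

BLOCK_SIZE = 4  # toy fixed block size for the illustration

def package(block_index, data):
    """Compress a changed block and tag it with its position and digest."""
    return {
        "index": block_index,                        # maps the block back to its file offset
        "digest": hashlib.sha256(data).hexdigest(),  # integrity check at the destination
        "payload": zlib.compress(data),
    }

def reassemble(packets):
    """Rebuild the file region; arrival order of packets doesn't matter."""
    out = bytearray(BLOCK_SIZE * len(packets))
    for p in packets:
        data = zlib.decompress(p["payload"])
        assert hashlib.sha256(data).hexdigest() == p["digest"]
        out[p["index"] * BLOCK_SIZE:(p["index"] + 1) * BLOCK_SIZE] = data
    return bytes(out)

blocks = [b"AAAA", b"BBBB", b"CCCC"]
packets = [package(i, b) for i, b in enumerate(blocks)]
restored = reassemble(packets[::-1])  # deliver the packets in reverse order
# restored == b"AAAABBBBCCCC"
```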

Finally, these data blocks are written to the cloud, another process optimized for speed, since Zetta owns and designs the infrastructure rather than relying on a public cloud.

These four steps were traditionally conducted in a largely sequential fashion. In addition to optimizing these individual steps, Zetta has found a way to parallelize them so that multiple steps can be conducted simultaneously, both for sub-file blocks from the same file and for those from different files.
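A toy sketch of the parallelization idea (my illustration, not Zetta's code): instead of finishing one stage for every block before starting the next, blocks are pushed through the scan/package/upload stages concurrently, so while one block is being uploaded another is already being packaged.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for the real backup stages in the article's four-step process.
def scan(block):
    return block                  # change detection would happen here

def pack(block):
    return block.upper()          # dedupe/compress/package stand-in

def upload(block):
    return f"stored:{block}"      # cloud write stand-in

def backup(blocks):
    """Run the full pipeline for many blocks concurrently."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(lambda b: upload(pack(scan(b))), b) for b in blocks]
        return [f.result() for f in futures]

print(backup(["a", "b", "c"]))    # ['stored:A', 'stored:B', 'stored:C']
```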

StorageSwiss Take

Historically, direct backup to the cloud has been slower than local backup. This has driven the popularity of appliances that capture backups on-site and then transfer them to the cloud. But with the advent of affordable gigabit internet bandwidth and technologies like Zetta's, the cloud is becoming as fast as the local infrastructure it's protecting.

This makes direct-to-cloud solutions viable alternatives for many more business environments, but it provides another value as well. For companies that believe no backup is really done until the data has safely been moved off-site, direct-to-cloud systems can eliminate that extra step and get the data protection process done much faster.

Eric is an Analyst with Storage Switzerland and has over 25 years of experience in high-technology industries. He's held technical, management and marketing positions in the computer storage, instrumentation, digital imaging and test equipment fields. He has spent the past 15 years in the data storage field, with storage hardware manufacturers and as a national storage integrator, designing and implementing open systems storage solutions for companies in the Western United States. Eric earned degrees in electrical/computer engineering from the University of Colorado and marketing from California State University, Humboldt. He and his wife live in Colorado and have twins in college.

