Amazon re:Invent: Cohesity Briefing – Modernizing Secondary Storage

While the industry places a lot of attention on all-flash arrays, for the most part these systems service only a small portion of data center capacity – the most active data. Secondary storage stores, or at least should store, the remaining 85 percent-plus of data. Secondary data is data that is not currently active, is a copy of active data, or is not performance sensitive. That definition expands the secondary storage use case significantly.

Instead of being limited to backups and archives, secondary storage, with the proper design, can also serve test/dev, host file shares, and act as an initial ingest point for IoT data. In fact, the more use cases IT throws at a secondary storage system, the better the ROI becomes. The problem is the way these systems have historically been architected. Most secondary storage systems are designed with a backup- or archive-only mentality, not for the broader use cases.

Rethinking Secondary Storage

The first generation of secondary storage systems focused primarily on storing backups. These were typically scale-up storage systems with features like deduplication and compression. And, to their credit, they improved the backup process significantly, especially getting backup data offsite. But backup data sets grew, features like recovery-in-place came to market and new use cases like copy data management were considered, all of which created challenges for these first-generation solutions. The result was that these appliances ran into scale issues in terms of both performance and capacity.

The next generation of secondary storage systems, like those from Cohesity, are distributed storage systems in which capacity, compute and network performance are aggregated from a cluster of storage nodes. But these are NOT object storage systems; they are designed to provide robust NFS and SMB support. Cohesity integrates into its architecture a complete backup solution that can protect virtual and physical systems. It also has the performance to deliver managed copies of data to secondary processes like test/dev, reporting or analytics.
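To make the scale-out idea concrete, here is a minimal conceptual sketch – an illustration only, not Cohesity's software – that models a cluster as a list of nodes and shows aggregate capacity and throughput growing simply by adding another node; the node figures are made up.

from dataclasses import dataclass

@dataclass
class StorageNode:
    """One node in a scale-out cluster; figures are illustrative only."""
    capacity_tb: float
    throughput_mbps: float

def cluster_totals(nodes):
    """Aggregate capacity and throughput across all nodes in the cluster."""
    return (
        sum(n.capacity_tb for n in nodes),
        sum(n.throughput_mbps for n in nodes),
    )

# Start with a four-node cluster, then grow it by adding a fifth node.
cluster = [StorageNode(48.0, 800.0) for _ in range(4)]
print(cluster_totals(cluster))   # (192.0, 3200.0)

cluster.append(StorageNode(48.0, 800.0))
print(cluster_totals(cluster))   # (240.0, 4000.0)

The point of the model is that there is no single controller to outgrow: every node added brings capacity, compute and network bandwidth with it.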

Cloud Connectivity is Key

Cohesity can use the cloud in three ways. First, data can be replicated to the cloud, creating an off-site archive copy for long-term data retention. This is essentially using the cloud as a giant dumping ground.
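To picture that first use case, the following is a minimal sketch – assuming a hypothetical S3 bucket, a hypothetical backup file and the AWS boto3 SDK – of landing a backup copy in the cloud under an archival storage class. It illustrates the concept of cloud as a long-term retention target, not Cohesity's actual replication engine.

import boto3

# Hypothetical names for illustration only.
BUCKET = "example-longterm-archive"
LOCAL_BACKUP = "/backups/vm-finance-2017-11-27.bak"
OBJECT_KEY = "archive/vm-finance-2017-11-27.bak"

s3 = boto3.client("s3")

# Upload the backup copy and land it directly in an archival storage class:
# cheap capacity, infrequent access, long retention.
s3.upload_file(
    LOCAL_BACKUP,
    BUCKET,
    OBJECT_KEY,
    ExtraArgs={"StorageClass": "GLACIER"},
)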

Second, Cohesity can leverage the cloud as another tier of storage within its system, essentially moving blocks of data that have not been accessed for a period of time to the cloud.
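The policy behind that kind of tiering can be sketched roughly as follows. This is an assumption-laden, file-level illustration with made-up paths and a 90-day threshold, whereas a real system like Cohesity's works at the block level using its own internal metadata.

import os
import time

# Hypothetical policy values for illustration.
COLD_AFTER_DAYS = 90
DATA_ROOT = "/secondary/views"

def find_cold_candidates(root, cold_after_days):
    """Yield paths whose last access time is older than the cold threshold."""
    cutoff = time.time() - cold_after_days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    yield path
            except OSError:
                continue  # file disappeared or is unreadable; skip it

if __name__ == "__main__":
    for candidate in find_cold_candidates(DATA_ROOT, COLD_AFTER_DAYS):
        # In a real tiering engine this is where data would be copied to
        # cloud object storage and replaced locally with a stub or pointer.
        print("cold, eligible for cloud tier:", candidate)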

The third method leverages Cohesity’s software-defined storage roots and instantiates the solution in a cloud provider like Amazon. Running the same solution in the cloud allows the provider’s compute resources to be leveraged for additional processing or for disaster recovery in the cloud.

While the base Cohesity system provides replication and copy data management, customers still have the challenge of providing compute for these secondary copies. Now, with an instance of the system in the cloud, they can use cloud compute to perform test/dev, analytics or reporting. Cohesity can even use the cloud to deliver compute in the event of a data center disaster.

StorageSwiss Take

There is a lot to like about the Cohesity solution. It converges much of the disparate software and hardware that makes data management so difficult. All-flash scale-up arrays will, for most data centers, eliminate the storage issues that confront mission-critical applications; modernized scale-out solutions have the potential to do the same thing for secondary storage.

Eight years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS and SAN, Virtualization, Cloud and Enterprise Flash. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.
