The time has come. Flash should be a part of an organization’s data protection architecture. The rapid recovery features available in most data protection software all claim in some form to be able to instantiate an application’s volume on the backup device and enable the application to resume operations very quickly. These rapid recovery techniques bypass the need to copy all of the application’s data across the network and greatly expand the usefulness of traditional backup’s role in providing high availability.
The problem is that the primary data protection target in most organizations is a high capacity disk backup appliance that stores data after it has been deduplicated and compressed. A high capacity drive storage system has fewer drives than a low capacity drive system, and fewer drives mean fewer controllers and lower performance. Deduplication and compression also mean that the data must be rehydrated prior to use. Worse, most backup appliances today use inline deduplication, so in a boot-from-backup scenario they are constantly rehydrating data as it is read and re-deduplicating it as it is rewritten. The result is performance so bad that application owners will likely prefer to wait for an old-fashioned, across-the-wire restore rather than suffer through the poor performance of a “rapid” recovery.
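To make the rehydration overhead concrete, here is a minimal, purely illustrative sketch of a chunk-based deduplicating store (not any vendor's actual implementation). Every logical read must reassemble the file chunk by chunk through an index lookup, and every write goes back through the deduplication index; the lookup counter stands in for the constant hydrate/deduplicate churn described above:

```python
import hashlib

CHUNK_SIZE = 4  # tiny chunks so the example is easy to follow

class DedupStore:
    """Hypothetical inline-deduplicating backup store."""

    def __init__(self):
        self.chunks = {}    # fingerprint -> chunk bytes (stored once)
        self.files = {}     # filename -> ordered list of fingerprints
        self.lookups = 0    # index lookups: the rehydration/dedup overhead

    def write(self, name, data):
        # Inline deduplication: fingerprint and index-check every chunk.
        recipe = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            fp = hashlib.sha256(chunk).hexdigest()
            self.lookups += 1
            self.chunks.setdefault(fp, chunk)  # store unique chunks only
            recipe.append(fp)
        self.files[name] = recipe

    def read(self, name):
        # Rehydration: one index lookup per chunk to reassemble the file.
        out = b""
        for fp in self.files[name]:
            self.lookups += 1
            out += self.chunks[fp]
        return out

store = DedupStore()
store.write("vm-volume", b"AAAABBBBAAAACCCC")  # "AAAA" is stored only once
restored = store.read("vm-volume")
print(restored == b"AAAABBBBAAAACCCC")  # True
print(len(store.chunks))                # 3 unique chunks for 4 logical chunks
print(store.lookups)                    # 8: every chunk touched on write and read
```

An application booted directly from such a store pays that per-chunk lookup cost on every read and write it issues, which is why the "rapid" recovery feels anything but rapid on a slow, high capacity appliance.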
The fix for this poor performance is a separate area where rapid recoveries can occur: either a hard disk based array that does not use deduplication and compression, or a flash array. A flash array may make more sense because it can continue to use deduplication and compression with minimal impact on application performance. It also, more than likely, better matches the day-to-day performance of the primary storage that supports the application, since many data centers are well on their way to using flash as the primary media for production storage.
To keep costs down, the organization should aggressively move data from the backup flash devices to a low cost object storage system, cloud storage, or maybe even a tape library; it may also want a combination of these options. The key is that the data protection software has to support the various tier types, and do so intelligently. Ideally, the flash tier holds only backup data from the organization's most critical systems, and only the data needed to return each application to its state as of the most recent backup. All other backup data, potentially more than 90% of the backup set, should go to the aforementioned object store, cloud storage, or tape library.
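The tiering rule above can be sketched as a simple placement policy. This is a hypothetical illustration, not any backup product's API; the system names, the one-day flash retention window, and the tier labels are all assumptions chosen for the example:

```python
from datetime import datetime, timedelta

def choose_tier(backup, now, critical_systems, flash_window=timedelta(days=1)):
    """Return 'flash' or 'bulk' for a backup record.

    Flash is reserved for backups that are both from a critical system
    and recent enough to serve a rapid recovery; everything else lands
    on the cheap bulk tier (object store, cloud, or tape).
    """
    is_critical = backup["system"] in critical_systems
    is_recent = now - backup["taken_at"] <= flash_window
    return "flash" if (is_critical and is_recent) else "bulk"

now = datetime(2019, 3, 1, 12, 0)
critical = {"erp-db", "order-api"}               # assumed critical systems
backups = [
    {"system": "erp-db", "taken_at": datetime(2019, 3, 1, 2, 0)},   # critical + recent
    {"system": "erp-db", "taken_at": datetime(2019, 2, 20, 2, 0)},  # critical but old
    {"system": "wiki",   "taken_at": datetime(2019, 3, 1, 2, 0)},   # recent, not critical
]
tiers = [choose_tier(b, now, critical) for b in backups]
print(tiers)  # ['flash', 'bulk', 'bulk']
```

Only the first record stays on flash; the other two, like the bulk of most backup sets, belong on the low cost tier.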
Storage Switzerland is discussing the need for backup architecture change several times this week. First, you can watch our on-demand webinar “Three Steps to Modernizing Backup Storage” here.
Second, if you’re in the Dallas-Fort Worth area, you can join us live at the Storage Technology Showcase. The inaugural event is in Grapevine, TX, in the Dallas-Fort Worth area, March 12-15. Each day is packed, and I mean packed, with educational presentations given by industry analysts (including yours truly). Each day focuses on a specific practice that IT professionals need to know. Day one focuses on Mass Storage: Cloud, Object Storage, and Tape. Day two is all about High Performance Storage: Disk, SSD, and Flash. Day three is all about the future: metadata, bandwidth, and analytics.
The Dallas-Fort Worth area is one of the largest metropolitan areas in the US and is home to thousands of organizations, yet it is often completely ignored for these types of events. It is important that the DFW IT community supports this event, not only to get great information and insight but also to make sure more events like this come to the area!
The cost of the conference is $680, which includes complete access to the conference, including all 23 sessions as well as breakfast and dinner. The first ten Storage Switzerland readers get 10% off by using the discount code 2019DISCOUNTS during registration.