Technically, an organization can protect a new all-flash array the same way it protects its current hard-drive-based array: use backup software to access the hypervisor, have it trigger a snapshot, back up all the blocks that have changed since the last backup, and then have that backup software copy the data to a hard-disk-based backup appliance. While this approach is a massive improvement over how we used to do backup, it does not really take advantage of the all-flash array.
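To make that changed-block workflow concrete, here is a minimal Python sketch. The per-block hashing scheme and function names are illustrative assumptions, not any vendor's API; the point is that each backup cycle ships only the blocks whose contents differ from the previous cycle.

```python
import hashlib

def block_hashes(volume: bytes, block_size: int = 4) -> list[str]:
    """Hash each fixed-size block so changes can be detected next cycle."""
    return [hashlib.sha256(volume[i:i + block_size]).hexdigest()
            for i in range(0, len(volume), block_size)]

def changed_blocks(volume: bytes, last_hashes: list[str],
                   block_size: int = 4) -> dict[int, bytes]:
    """Return only the blocks whose hash differs from the last backup."""
    current = block_hashes(volume, block_size)
    return {i: volume[i * block_size:(i + 1) * block_size]
            for i, h in enumerate(current)
            if i >= len(last_hashes) or h != last_hashes[i]}

# First cycle: record a baseline from the snapshot.
baseline = block_hashes(b"AAAABBBBCCCC")

# Next cycle: only one 4-byte block was modified, so only it is sent.
delta = changed_blocks(b"AAAAXXXXCCCC", baseline)  # {1: b"XXXX"}
```

The backup appliance then merges each delta into its copy, which is exactly why the approach works regardless of whether the source is flash or spinning disk.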
IT also needs to recognize that users are constantly raising their expectations, and all-flash raises those expectations even further. Once users become accustomed to all-flash performance, they will be less tolerant of poor recovery performance.
Most organizations are attracted to all-flash because of its ability to solve the storage performance problems their applications experience. But all-flash can also improve the data protection process. At its most basic level, a backup is a copy of production data, and since most storage systems leverage snapshots to create that copy, flash matters: flash systems can create snapshots faster and are less impacted when they have to manage hundreds of them.
Those flash arrays do need to interface with the hypervisor in order to get a “clean” snapshot. Many don’t have that capability, and users have to cobble together scripts to get the job done. Make sure your all-flash array can interface directly with the hypervisor so the snapshots it takes are “application aware.” My colleague W. Curtis Preston recently wrote a blog detailing the importance of application-aware snapshots and how they should work.
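The ordering that makes a snapshot “application aware” is simple: quiesce the application through the hypervisor, take the snapshot, then thaw, with the thaw guaranteed even if the snapshot fails. This Python sketch simulates that sequence with stand-in classes; every name here is hypothetical, not a real hypervisor or array API.

```python
class App:
    """Stand-in for an application reached via hypervisor tools."""
    def __init__(self):
        self.frozen = False
        self.events = []

    def flush_and_freeze(self):
        # Flush in-memory writes to disk and pause new I/O.
        self.frozen = True
        self.events.append("freeze")

    def thaw(self):
        self.frozen = False
        self.events.append("thaw")

class Array:
    """Stand-in for the all-flash array's snapshot call."""
    def snapshot(self, app, volume):
        if not app.frozen:
            raise RuntimeError("app not quiesced: snapshot would be crash-consistent only")
        app.events.append(f"snap:{volume}")
        return f"snap-of-{volume}"

def application_aware_snapshot(array, app, volume):
    app.flush_and_freeze()
    try:
        return array.snapshot(app, volume)  # no in-flight writes captured
    finally:
        app.thaw()                          # always resume the application
```

The `try`/`finally` is the part the cobbled-together scripts usually get wrong: if the snapshot call fails and the application is never thawed, production I/O stays paused.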
No storage system is perfect; it can fail, or someone can hack it. To keep data safe, organizations should copy their snapshots to a separate device as soon as possible, and off-site soon after. Ideally the organization should deploy both strategies: replicate snapshots to a remote site and make backups of critical data to a secondary device. Typically, replication to a remote site requires a similarly configured storage system, so this protection is reserved for only the most mission-critical workloads.
Most of the workloads in the environment will be protected to, and recovered from, a secondary storage device. While these workloads may not be critical enough to justify replication to a similar unit in a remote location, they are often important enough that their protection and recovery have to occur quickly. It may seem, though, that the organization is limited to protecting these systems the “old” way to keep costs down, which makes it difficult for IT to meet users’ higher recovery expectations.
What IT needs is a way for the disk backup solution to interface directly with the all-flash array so that changed data can be sent more efficiently. Essentially, the disk backup appliance becomes a very low-cost replication target for the all-flash array: instead of having to “replicate” to another all-flash array, it can copy changed data to the disk backup system. Then, if the disk backup appliance can also present the data it stores directly to the virtual environment as part of the recovery process, users will experience much faster recovery times.
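A hedged sketch of what such an interface might look like: the array pushes only changed blocks to the backup target, and the target can assemble the current image and present it directly to the virtual environment, with no full restore copy first. The class and method names here are hypothetical, not a real appliance API.

```python
class DiskBackupTarget:
    """Hypothetical low-cost replication target for an all-flash array."""
    def __init__(self):
        # Per-volume map of block index -> latest block contents.
        self.volumes: dict[str, dict[int, bytes]] = {}

    def receive_changes(self, volume_id: str, changes: dict[int, bytes]):
        """Apply only the changed blocks sent by the array."""
        self.volumes.setdefault(volume_id, {}).update(changes)

    def present(self, volume_id: str) -> bytes:
        """Assemble the latest image and hand it straight to the
        virtual environment -- no full restore copy needed first."""
        blocks = self.volumes[volume_id]
        return b"".join(blocks[i] for i in sorted(blocks))

# Initial sync ships every block; later cycles ship only the delta.
target = DiskBackupTarget()
target.receive_changes("vm-01", {0: b"AAAA", 1: b"BBBB", 2: b"CCCC"})
target.receive_changes("vm-01", {1: b"XXXX"})  # one block changed since
recovered = target.present("vm-01")            # current image, on demand
```

Presenting the stored image directly is what collapses recovery time: the virtual machine can boot from the backup target while data is migrated back to the all-flash array in the background.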
Understanding the ramifications for data protection is just one element of the all-flash after-effect. In our on-demand webinar we also cover how the network has to evolve and how the system has to gather real-time analytics to solve problems before they occur, as well as dive deeper into how to protect an all-flash array from a disaster.