Some corporate financial planners point to hybrid cloud solutions as a way to reduce storage costs. The idea is to move redundant copies of information off expensive onsite primary storage to lower-cost storage in the cloud. But could these savings be a smokescreen that ends up hiding a much larger problem?
Cloud Data Fragmentation
Public cloud storage can reduce costs by eliminating the need to purchase and maintain dedicated infrastructure to host backup, archival and DR data. However, unless there is a way to sift through all of this information so that it can be readily searched and quickly retrieved, any up-front savings realized by pushing data into the cloud could pale in comparison to the man-hours spent trying to find that information when it is really needed.
Take data recovery as one example. For years, many businesses have relied on storage snapshot technology to quickly revert to a point-in-time copy of a given application's data. The cloud makes this technology even more powerful by moving snapshot data off-site, where it is not dependent on the original arrays for access. But what if local snapshot information gets corrupted and the only copies to recover from are snapshots archived in the cloud? This may be less of an issue if there are only a few dozen snapshots to search through, but what happens when there are hundreds or thousands of snapshots to parse, which thanks to cloud storage is a distinct possibility? The time it could take to locate the right snapshot or set of snapshots could keep key systems offline, potentially impacting hundreds of end-users and hurting business revenue.
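The recovery scenario above boils down to a metadata search: given a timestamp (for instance, the moment corruption was detected), find the newest clean snapshot for the affected application. A minimal sketch of that lookup, using hypothetical snapshot records rather than any vendor's actual API:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical snapshot records; in practice this metadata would come
# from the storage array or cloud provider's management interface.
@dataclass
class Snapshot:
    snapshot_id: str
    app: str
    created: datetime

def latest_snapshot_before(snapshots, app, cutoff):
    """Return the newest snapshot for `app` taken before `cutoff`
    (e.g. the moment corruption was detected), or None."""
    candidates = [s for s in snapshots if s.app == app and s.created < cutoff]
    return max(candidates, key=lambda s: s.created, default=None)

catalog = [
    Snapshot("snap-001", "erp", datetime(2015, 6, 1, 2, 0)),
    Snapshot("snap-002", "erp", datetime(2015, 6, 2, 2, 0)),
    Snapshot("snap-003", "crm", datetime(2015, 6, 2, 3, 0)),
]

best = latest_snapshot_before(catalog, "erp", datetime(2015, 6, 2, 12, 0))
print(best.snapshot_id)  # snap-002
```

With a consolidated catalogue of snapshot metadata, this query runs in seconds; without one, an administrator is left browsing thousands of snapshots by hand across multiple arrays and cloud buckets.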
Data Sprawl Chaos
The lack of data control could also contribute to a variety of ills that negatively impact businesses. First, it could lead to infrastructure sprawl, both in the primary data center and in the cloud. Data snapshots, for example, could be pushed onto cloud resources, but if there is no way to centrally browse this information and identify redundant copies, cloud-related costs could needlessly escalate. Second, without a mechanism for quickly identifying the right data sets to support application development efforts, opportunities to streamline operational workflows (DevOps) could be missed. And of course, business agility depends largely on the ability to analyze data on demand to facilitate better business decision-making. In short, without real visibility, insight and control over business data, whether it is onsite or in the cloud, supporting any of the above use cases can be a non-starter.
Virtualized Data OnTap
Virtual appliance technology is making it easier to deploy compatible, software-defined storage systems across hybrid cloud environments. For example, the NetApp OnTap operating system can now be implemented as a virtual appliance in environments like Amazon's, enabling businesses to rapidly provision a virtual filer in the cloud. The advantage of this approach is that it gives businesses with an investment in NetApp technology a common set of tools and processes for protecting and accessing data across hybrid cloud environments, while running on commodity infrastructure. In addition to reducing computing and storage costs, it can increase business agility by allowing businesses to deploy application infrastructure on demand. The problem with this approach is that data is now dispersed across multiple locations (on-premises and cloud) and filer types (physical and virtual).
Global Snapshot Management
A key advantage of the OnTap operating system is the power of its snapshots. They can be leveraged for much more than data protection; they can also serve as space-efficient secondary copies of data. And they can be stored on different filer instances and locations, including the cloud, so that they are not dependent on a single filer.
To take advantage of these benefits, organizations should consider implementing a layer of intelligence that aggregates the disparate snapshot information dispersed across multiple filers (on-premises and cloud) and presents it in a consolidated, searchable interface. By implementing a global snapshot copy data management catalogue, such as Catalogic's ECX solution, storage managers can quickly ascertain what information needs to be retained and archived offsite to satisfy backup, DR and corporate data governance requirements, as well as which copies should be maintained for analytics processing and DevOps. Applying this intelligence would allow businesses to free up valuable primary storage space for critical business applications by migrating most of this data into the cloud.
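Conceptually, such a catalogue does two things: it merges per-filer snapshot inventories into one searchable index, and it flags snapshots held in more than one place so redundant copies can be reclaimed. A minimal sketch under assumed data, with hypothetical filer names and inventories (a real catalogue would pull this metadata via each filer's management API):

```python
from collections import defaultdict

# Hypothetical per-filer inventories of (volume, snapshot) pairs;
# filer names and snapshot labels are illustrative only.
filer_inventories = {
    "filer-onprem-01": [("vol_erp", "snap.2015-06-01"), ("vol_erp", "snap.2015-06-02")],
    "filer-cloud-01":  [("vol_erp", "snap.2015-06-01"), ("vol_crm", "snap.2015-06-02")],
}

def build_catalog(inventories):
    """Map each (volume, snapshot) pair to every filer holding a copy."""
    catalog = defaultdict(list)
    for filer, snaps in inventories.items():
        for vol, snap in snaps:
            catalog[(vol, snap)].append(filer)
    return catalog

def redundant_copies(catalog):
    """Snapshots held in more than one location: candidates for reclaiming space."""
    return {key: filers for key, filers in catalog.items() if len(filers) > 1}

catalog = build_catalog(filer_inventories)
print(redundant_copies(catalog))
```

The same index that answers "where does this snapshot live?" for recovery also answers "which copies are duplicated?" for cost control, which is why a single consolidated view serves both the DR and the sprawl problems described earlier.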
There are significant opportunities for organizations to improve business agility and lower costs by leveraging public cloud storage resources. The increasing ubiquity of software-defined technologies, like Data OnTap, in conjunction with virtualized server infrastructure, is enabling organizations to accelerate application development cycles, enhance data protection and DR, and foster new opportunities to increase business profitability. But in the absence of a centralized data management framework that provides IT planners with greater insight, visibility and control over their data assets, these opportunities may prove to be unattainable.
Copy data management solutions, like those from Catalogic, give storage managers a centralized, consolidated view of all their snapshot information across the enterprise and in the cloud. This gives application owners and IT managers a tool to quickly identify the data sets needed to support test/dev workflows, feed data analytics systems and bolster DR capabilities. It also provides a way to identify data protection redundancies so that valuable storage space can be reclaimed and re-provisioned to improve efficiencies.
Sponsored by Catalogic Software