The objective of data management is to make sure the right data is on the right type of storage at the right time. The problem is that most vendors are too myopic in their treatment of the process. They either focus on storage performance and move data to higher-performance storage systems, or they focus on storage costs and move data to slower, more cost-effective storage systems. Data centers need an end-to-end data management solution.
Where Traditional Data Management Falls Short
Traditional data management solutions, whether performance-oriented or cost-savings-oriented, tend to move data across a finite number of tiers, often within a small ecosystem of storage systems. An example of a performance-oriented data management system is a hybrid flash array. These arrays tend to move data from a flash tier to a hard disk tier as data becomes less active. The problem is that the data movement happens entirely within the one system and is limited to those two tiers.
An example of a cost-oriented data management system is an archive solution. These will migrate data that is not being accessed from more expensive storage, like all-flash or hybrid arrays, to less expensive storage. Some archive solutions will even support multiple storage back ends, such as on-premises high-capacity NAS and the cloud. But they will only accelerate data back to the tier of storage it was on when it was archived. If there is a higher-performance option like an NVMe all-flash array or an internal NVMe PCIe flash card, the archiving solution won’t use it.
What is End-to-End Data Management?
End-to-end data management covers the full spectrum of the data lifecycle across multiple tiers and storage types. It implements a global file system that incorporates all of the data center’s known storage systems and can assimilate new storage systems as they are introduced. It can move across multiple types of storage systems, from different storage vendors, including cloud storage, based on performance or cost objectives.
Introducing Primary Data’s DataSphere 2.0
Primary Data is an end-to-end data management company. Its DataSphere software solution is an intelligent global file system that can automatically move data, based on either a performance need or a cost-savings opportunity, across a variety of vendors’ solutions. Administrators set data-set-specific policies, or objectives, based on performance and cost considerations. Data then automatically moves between storage types and tiers as the system works to reach a homeostatic state for the organization’s data.
Primary Data has announced version 2.0 of its DataSphere software. This release features Objective Expressions, which allow finer-grained control over how, when and where data is acted on. Most data management solutions focus solely on the access date as the trigger for data movement. In version 2.0, Primary Data adds the ability to act on metadata attributes such as file location, file type, file size, access time, modification time, and last open time. Each of these attributes can be expressed in policies the system executes automatically as data meets certain criteria.
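To make the idea concrete, the sketch below shows what a metadata-driven placement policy might look like in Python. This is purely illustrative and is not DataSphere’s actual Objective Expression syntax, which is proprietary; the tier names, thresholds and rule ordering are invented for the example.

```python
import time
from dataclasses import dataclass

# Hypothetical sketch of metadata-driven tiering rules. Not Primary
# Data's actual policy language; tier names and thresholds are invented.

@dataclass
class FileMeta:
    path: str
    size_bytes: int
    last_access: float    # epoch seconds
    last_modified: float  # epoch seconds

def days_since(ts: float, now: float) -> float:
    return (now - ts) / 86400

def choose_tier(meta: FileMeta, now: float) -> str:
    """Evaluate ordered rules over file metadata; first match wins."""
    # Cold, large files go to the cheapest tier (e.g. cloud object store).
    if days_since(meta.last_access, now) > 90 and meta.size_bytes > 100 * 2**20:
        return "cloud-archive"
    # Recently modified media files stay on flash for performance.
    if meta.path.endswith((".mp4", ".mov")) and days_since(meta.last_modified, now) < 7:
        return "nvme-flash"
    # Everything else lands on capacity NAS.
    return "capacity-nas"

now = time.time()
cold = FileMeta("/proj/old.iso", 500 * 2**20, now - 200 * 86400, now - 200 * 86400)
hot = FileMeta("/proj/edit.mp4", 2 * 2**30, now - 100, now - 3600)
print(choose_tier(cold, now))  # cloud-archive
print(choose_tier(hot, now))   # nvme-flash
```

The point of the sketch is that rules combine several attributes (size, type, access and modification times) rather than keying off a single access date.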
If they leverage the cloud at all, most data management solutions support only a single cloud, and only as the last step in the data management process. DataSphere version 2.0 adds support for multiple clouds and locations. Data of one type can be stored in one cloud, and data of another type can be stored in an alternate cloud, transparently to the application.
This capability also means data that needs to be stored in specific “in-country” clouds can be managed by policy to respect data sovereignty laws. Version 2.0 also improves DataSphere’s snapshot capability to move or copy data to the cloud, protecting data without impacting enterprise capacity.
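A type- and region-aware placement rule of the kind described above might be sketched as follows. This is an illustrative mock-up, not Primary Data’s routing logic; the cloud target names and region codes are invented, and a sovereignty constraint is modeled as simply overriding the type-based rule.

```python
from typing import Optional

# Hypothetical sketch: route data to a cloud target by file type,
# with in-country (sovereignty) rules taking precedence.
# All target and region names are invented for illustration.

CLOUD_BY_TYPE = {
    ".parquet": "analytics-cloud",  # analytics data to one provider
    ".bak":     "archive-cloud",    # backups to a cheaper provider
}

SOVEREIGN_CLOUD = {
    "DE": "eu-central-cloud",  # in-country target for German data
    "CA": "canada-cloud",
}

def place(path: str, data_region: Optional[str] = None) -> str:
    """Pick a cloud target; sovereignty constraints override type rules."""
    if data_region in SOVEREIGN_CLOUD:
        return SOVEREIGN_CLOUD[data_region]
    for suffix, cloud in CLOUD_BY_TYPE.items():
        if path.endswith(suffix):
            return cloud
    return "default-cloud"

print(place("/sales/q3.parquet"))      # analytics-cloud
print(place("/hr/payroll.bak", "DE"))  # eu-central-cloud
```

The design choice worth noting is precedence: legal residency requirements are evaluated before cost- or type-based preferences, so a policy engine never trades compliance for savings.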
Windows is now fully supported on DataSphere 2.0. It adds support for SMB 2.1 and 3.1, as well as Active Directory. DataSphere also supports shares used by both Windows and NFS (Linux) clients to ensure that security and file permissions are handled correctly across both NFS and SMB.
Version 2.0 enables rapid adoption of DataSphere without interruption. The new release can assimilate an NFS storage system’s metadata without requiring downtime, so data can be seamlessly placed into the DataSphere Global Namespace without being copied. Once the data is assimilated, IT can set policies against it and DataSphere can move it as appropriate.
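The core idea of metadata-only assimilation can be illustrated with a short sketch: walk a mounted export and catalog each file’s attributes while the file data stays where it is. This is not DataSphere’s internal mechanism (which is not public); the catalog dictionary simply stands in for a global-namespace entry.

```python
import os

# Hypothetical sketch of metadata-only assimilation: record each
# file's location and attributes without reading or copying its data.

def assimilate(mount_point: str) -> dict:
    catalog = {}
    for root, _dirs, files in os.walk(mount_point):
        for name in files:
            full = os.path.join(root, name)
            st = os.stat(full)
            # Only metadata enters the catalog; data stays in place.
            catalog[full] = {
                "size": st.st_size,
                "mtime": st.st_mtime,
                "atime": st.st_atime,
            }
    return catalog
```

Because only stat-level metadata is collected, the scan can run against a live NFS export, which is what makes downtime-free adoption plausible.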
Finally, 2.0 brings better visualization of data across the environment. The user interface has been enhanced with a new look and feel. It enables the creation of new objectives based on existing objectives, and objectives can link to various parts of the UI for faster navigation. In addition, data mover statistics provide insight into data activity in the cloud, and a new dashboard provides complete metadata details about the data being stored.
Data management is no longer a “nice to have.” The demand for increased performance, the rampant growth of unstructured data and the number of options available for storing data require that the enterprise take a new approach to data management.
An end-to-end data management solution like Primary Data’s DataSphere 2.0 enables and automates intelligent end-to-end data management. With the solution in place, data centers can eliminate manual data migration projects, automate the implementation of new storage resources, create a pathway to the cloud and optimize both performance and storage costs.