There are plenty of solutions that promise to help an organization migrate or transform data in the cloud era. The problem is that most of them are either too simplistic for the task or too complicated, trying to do too much. Most organizations simply want to move data from Point A to Point B.
Unstructured data, or file data, is increasingly critical to the enterprise. The problem is that when this data is created, it is often isolated on the system where it was created. Its custodians are the infrastructure team, who lack business context for the data, so data does not tend to flow through the enterprise, even though most unstructured data quickly becomes dormant after creation and is an ideal candidate for movement to an alternate location. The challenge is that identifying and then moving this data requires careful analysis and planning; a simple drag and drop will not do the job.
Also, unstructured data is no longer stored merely to meet some regulatory requirement; it is stored to be mined, now or in the future. Using analytics processes, organizations are interested in gaining insights or discovering new trends. But for these secondary analytics to occur, data must be migrated to a store or file system that keeps it in a native, future-accessible format.
The Data Movement Challenge
There are several data management and movement solutions on the market. The problem is that these solutions often require an additional component that either bottlenecks performance or locks the customer into the migration vendor’s solution.
For example, gateways are commonplace. These systems translate between one file system type and another (SMB to Object, for example). The problem is that the gateway becomes a bottleneck and a roadblock to maximum performance. File virtualization, as another example, locks the customer into a particular vendor’s metadata management engine; if the provider of the file virtualization solution goes out of business, it is very difficult for the customer to move to another provider. Then there are solutions that leave stub files in the original file’s location. These stub files are vulnerable to all sorts of problems and actually increase file count.
What Data Centers Need
Data centers need to analyze, move, manage and modernize the architectures they use to support unstructured data. These tasks cannot be accomplished manually; there is simply too much data and too many discrete files across too many systems for humans to manage them all. Instead, IT needs a platform that can scan and analyze data across traditional network mounts like NFS and SMB as well as object storage mounts and cloud storage.
After the analysis is done, IT needs to be able to seamlessly move this data to alternate platforms. This movement is not solely motivated by cost, although cost savings is a big factor. In many cases organizations want to repurpose the data. For example, they may want to move data to the cloud to leverage cloud-based services like indexing for search, video/audio transcription and other big-data functions.
Cost savings comes in at the data management part of the process, where, thanks to the analytics component, cold data can be identified and then moved to a more cost-appropriate tier. That tier may be a high-capacity NAS on-premises, an object storage system on-premises or a cloud-based object store. Again, making sure that data lands in one of these targets in its native form provides flexibility for future access and potential data repurposing.
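To make the cold-data identification step concrete, the sketch below is a simplified, hypothetical stand-in for the kind of scan such an analytics component automates (it is not StorageX’s actual implementation, and a real platform would also cover SMB/NFS mounts and object stores, not just a local directory tree):

```python
import os
import time

def find_cold_files(root, days_cold=180):
    """Return paths of files under `root` not accessed in `days_cold` days.

    A minimal illustration of identifying dormant data that is a
    candidate for movement to a lower-cost tier.
    """
    cutoff = time.time() - days_cold * 86400
    cold = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    cold.append(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
    return cold
```

In practice the threshold, and whether access time, modify time or both drive the decision, would be policy choices made during the analysis phase.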
Finally, many organizations are also looking to modernize their unstructured data infrastructure by moving data from traditional file-based storage systems to native S3 Object Storage systems. Once again, the key is that organizations want to move this data correctly, but also independently of a third-party file virtualization or gateway solution. Doing so allows data to land on the object store in its native format and lets organizations leverage the advanced capabilities of object storage, such as advanced metadata and custom keys.
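As a rough illustration of the metadata point, the sketch below shows one hypothetical way source-file attributes could be carried into S3 user-defined object metadata during a native file-to-object migration. The helper and its key names are illustrative assumptions, not StorageX’s API; the boto3 upload call is shown commented out since it requires a live bucket:

```python
import os
import stat

def build_s3_metadata(path):
    """Map a source file's POSIX attributes to S3 user-defined metadata,
    so they survive a native file-to-object migration.

    S3 stores user-defined metadata under an x-amz-meta- prefix;
    the keys below are illustrative, not a standard.
    """
    st = os.stat(path)
    return {
        "src-path": path,
        "src-mtime": str(int(st.st_mtime)),
        "src-mode": oct(stat.S_IMODE(st.st_mode)),
        "src-size": str(st.st_size),
    }

# With boto3 installed and credentials configured, the metadata
# would accompany the upload, for example:
# import boto3
# s3 = boto3.client("s3")
# s3.upload_file(path, "my-bucket", object_key,
#                ExtraArgs={"Metadata": build_s3_metadata(path)})
```

Because the data lands as native S3 objects with this metadata attached, any downstream tool that speaks S3 can query or act on it without a gateway in the path.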
Introducing StorageX 8.0
Data Dynamics StorageX 8.0 is a software application that enables the movement of unstructured data between storage systems in the enterprise. Historically, its primary use was as a migration tool, simplifying the movement of data from Vendor A’s NAS to Vendor B’s NAS. The 8.0 release expands the use case, enabling customers to analyze their existing data and make informed decisions, move data from legacy protocols like SMB and NFS to S3 Object Storage, manage their data by archiving to S3 Object Storage, and modernize their environment by moving file data to native S3 objects.
While there are a variety of solutions on the market that promise similar capabilities, they are either gateway solutions prone to bottleneck problems or file virtualization/namespace solutions that require customers to jump in with both feet and trust that the vendor will remain in business for the life of their data. In both cases, when the data gets to the new environment it acts like it is still on the old one, which means it can’t take advantage of the features and capabilities of the new environment.
StorageX is at the opposite end of the spectrum. It simply allows customers to move data from point A to point B and, when that data gets to “point B,” have it be in the new native format so it can take full advantage of the environment’s features. This includes taking advantage of the advanced metadata tagging common in S3 and object storage systems. With tagging in place, the ability to perform analytics on this data is greatly improved. Essentially, as part of the migration, the data is “modernized.” Finally, after the migration is done, there is no need to keep StorageX running; IT can simply shut it down until the next time it is needed.
The key new additions to StorageX are its improved analysis capabilities, which include a portal for access as well as the ability to apply custom tasks and create user-defined reports on the data in the enterprise. The other big addition in 8.0 is the inclusion of S3 Object Storage in all aspects of the product, including analysis of S3 data storage, the ability to move data to and from S3 data storage, and the ability to transform data into a native S3 format.
Data Dynamics’ StorageX has been proven in the market for a long time; it is a well-vetted solution. 8.0’s inclusion of S3 and its enhanced ability to analyze current file assets are welcome additions to the product. Data Dynamics’ approach of staying out of the data path, providing the organization direct, native access to its data, and having archived data continue to work for it should have high value. Organizations looking to implement new storage systems or S3-compatible object storage systems, or to move data to the cloud, should seriously consider StorageX to help them with that process.
Sponsored by Data Dynamics