The Time is Now for File Virtualization

One part of file virtualization is a global file system that abstracts the physical location of data from the logical directory structure. Even though data may move between physical file servers or network-attached storage (NAS) systems, users continue to access it through the same logical path. File virtualization goes beyond a global file system. It enables that file system to pull storage capacity from other file servers and NAS systems, creating a global pool for storing unstructured data. Finally, it adds policies that automate the movement of data, either to balance loads or to drive down storage costs.
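
To make the idea concrete, here is a minimal, hypothetical sketch of what a global file system does: a namespace layer maps a stable logical path to whichever physical backend currently holds the data, so the data can move without the path changing. The class, paths, and backend names below are illustrative assumptions, not DataCore's implementation.

```python
# Conceptual sketch of a global file system namespace (illustrative only).

class GlobalNamespace:
    def __init__(self):
        # logical path -> (backend, physical location); hypothetical entries
        self._map = {
            "/projects/q3/report.docx": ("nas-01", "/vol2/proj/q3/report.docx"),
            "/projects/q3/raw.csv":     ("s3-archive", "bucket-a/raw.csv"),
        }

    def resolve(self, logical_path):
        """Return the backend and physical location behind a logical path."""
        return self._map[logical_path]

    def migrate(self, logical_path, new_backend, new_location):
        """Move data behind the scenes; the logical path never changes."""
        self._map[logical_path] = (new_backend, new_location)


ns = GlobalNamespace()
print(ns.resolve("/projects/q3/report.docx"))   # ('nas-01', '/vol2/...')
ns.migrate("/projects/q3/report.docx", "nas-02", "/vol7/q3/report.docx")
print(ns.resolve("/projects/q3/report.docx"))   # same path, new physical home
```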

The concept of file virtualization is not new, but its adoption rate is relatively low. The primary reason for the technology's slow start is the lack of a glaring need. Most organizations have learned to live with the operational overhead of dealing with multiple file servers or NAS systems, especially if those systems were from one vendor.

Unstructured Data Challenges

The state of unstructured data is changing, though, and the time is ripe for file virtualization. One of those changes is the size of the unstructured data set, which now, for most organizations, dwarfs the structured data set. Where an organization might once have had five or six NAS systems, it might now have a dozen or more, from different vendors. With this many file systems to navigate, users have a hard time finding the files they need. IT faces the challenge of protecting all this data, managing so many systems, and making sure that unstructured data doesn't consume the entire data center budget.

A second change is the number of protocols used to access unstructured data. There are, of course, legacy protocols like NFS and SMB, but now IT needs to contend with demands for parallel NFS (pNFS) and object storage based on S3. There is also a need for mixed access. For example, an IoT device might transmit data via NFS for storage, but an analytics process may want to process that same data in parallel via pNFS or read it as objects.
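
A hedged illustration of what mixed-protocol access looks like in practice, assuming a virtualization layer exposes the same data both at an NFS mount point and through an S3-compatible endpoint. The mount path, endpoint URL, bucket, and credentials below are hypothetical.

```python
import boto3

# The "IoT device" side: write a reading through a (hypothetical) NFS mount.
with open("/mnt/vfilo/sensors/device-42/latest.json", "w") as f:
    f.write('{"temp_c": 21.4, "ts": "2020-01-15T10:00:00Z"}')

# The "analytics" side: read the same data as an object over S3.
s3 = boto3.client(
    "s3",
    endpoint_url="https://vfilo.example.local:9000",  # hypothetical endpoint
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)
obj = s3.get_object(Bucket="sensors", Key="device-42/latest.json")
print(obj["Body"].read().decode())
```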

Introducing DataCore’s vFilO

DataCore’s vFilO is a distributed file and object storage virtualization solution. Today, it is a separate product from the company’s SANSymphony block storage virtualization solution, but vFilO can consume block storage from SANSymphony and present it as an SMB or NFS mount point. vFilO can consume storage from a variety of providers, including NFS or SMB file servers, most NAS systems, and S3 object storage systems, including S3-based public cloud providers. Once vFilO integrates these various storage systems into its environment, it presents users with a logical file system that is abstracted from the actual physical location of data.

Architecturally, vFilO is service-driven. The services run on nodes that form a vFilO cluster. Metadata services manage the mapping of the logical file-system location to the physical location, while data services provide access to data via NFS, SMB, or S3. The services can be distributed across multiple nodes so that the file virtualization engine does not become a performance bottleneck.
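
The sketch below shows, conceptually, how metadata and data services might be spread across cluster nodes so that any node running the needed service can answer a client. Node names and service placement are illustrative assumptions, not DataCore's internals.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    services: list = field(default_factory=list)

# Hypothetical three-node cluster with services spread across nodes.
cluster = [
    Node("node-1", ["metadata", "nfs"]),
    Node("node-2", ["metadata", "smb"]),
    Node("node-3", ["s3", "nfs"]),
]

def nodes_serving(service):
    """Any node running the requested service can handle the request."""
    return [n.name for n in cluster if service in n.services]

print(nodes_serving("nfs"))       # ['node-1', 'node-3']
print(nodes_serving("metadata"))  # ['node-1', 'node-2']
```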

Using DataCore’s vFilO

There are two use cases for vFilO that should immediately jump out at IT professionals. The first is its capability as a global file system: IT can add new NAS systems or file servers to the environment without having to remap users to the new hardware, and the solution supports live migration of data between the storage systems it has assimilated.

The other obvious use case is to leverage the capabilities of the global file system and the software’s policy-driven data management to move older data to less expensive storage automatically. The lower-cost target can be a high-capacity NAS, but more interestingly, it can also be an object storage system. vFilO can transparently move data from NFS/SMB to object storage. If the user needs access to this data in the future, they access it like they always have. To them, the data has not moved.
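
A minimal sketch of the kind of age-based policy described above: files untouched for longer than a threshold are candidates to move to a cheaper tier, while the logical path users see stays the same. The threshold, paths, and the move operation are illustrative assumptions rather than vFilO's actual policy engine.

```python
import os, time

ARCHIVE_AFTER_DAYS = 365  # hypothetical policy threshold

def select_candidates(root):
    """Yield files whose last access time is older than the policy threshold."""
    cutoff = time.time() - ARCHIVE_AFTER_DAYS * 86400
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_atime < cutoff:
                yield path

def tier_down(path):
    # A real file virtualization layer would relocate the data to object
    # storage and update the namespace mapping; this sketch only reports.
    print(f"would move {path} to the object tier")

for candidate in select_candidates("/mnt/vfilo/projects"):
    tier_down(candidate)
```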

The concept of moving old data to less expensive storage has been with us for a long time. While the ROI always makes sense on the whiteboard, it rarely executes well. The problem is that to see the cost benefit of a high-capacity NAS or object store, the organization needs to buy a rather large system (100TB is a common starting point). Making the ROI calculation worse, the organization already owns the storage that holds the old data, so leaving that data in place appears to cost nothing. Unless there is an object-storage-specific workload that the archive process can piggyback onto, the ROI of the object storage system becomes difficult to justify.

vFilO offers an additional way to establish an archiving process: a cloud-based archive. Cloud capacity can be purchased incrementally, 1TB at a time if needed. With vFilO, organizations don’t need to archive anything on day one. They can wait for the demand for additional storage and, instead of buying more on-premises capacity as they usually would, use vFilO to migrate just enough of their oldest data to the cloud to make room for the new demand. By building the archive incrementally, they defer the cost of a significant upfront archive storage investment.
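
A back-of-the-envelope comparison of the two approaches described above, using entirely hypothetical prices and migration rates: a 100TB archive system bought up front versus cloud capacity consumed only as old data is moved.

```python
UPFRONT_ARCHIVE_TB = 100
UPFRONT_PRICE_PER_TB = 200          # hypothetical one-time $/TB for a large archive system
CLOUD_PRICE_PER_TB_MONTH = 10       # hypothetical $/TB-month for cloud capacity
TB_ARCHIVED_PER_MONTH = 2           # hypothetical pace of incremental migration

upfront_cost = UPFRONT_ARCHIVE_TB * UPFRONT_PRICE_PER_TB

cloud_cost = 0
archived = 0
for month in range(1, 13):          # first year only
    archived = min(archived + TB_ARCHIVED_PER_MONTH, UPFRONT_ARCHIVE_TB)
    cloud_cost += archived * CLOUD_PRICE_PER_TB_MONTH

print(f"Upfront archive spend, day one: ${upfront_cost:,}")
print(f"Incremental cloud spend, year one: ${cloud_cost:,}")
```

The absolute numbers are made up; the point is that the incremental approach defers the large day-one outlay until the archive has proven its value.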

There are other use cases beyond simplifying migration and lowering storage costs. Organizations can use vFilO to balance performance across all their file servers and NAS systems. They can use it to improve resiliency within the data center and across data centers. They can also use vFilO as a data distribution system to make sure that each location has access to the data it needs.

StorageSwiss Take

The ROI of file virtualization is powerful, but the technology has struggled to gain adoption in the data center. DataCore has the advantage of over 10,000 customers that are much more likely to be receptive to the concept since they have already embraced block storage virtualization with SANSymphony. Building on its customer base as a beachhead, DataCore can then expand file virtualization’s reach to new customers, who, because of the changing state of unstructured data, may finally be receptive to the concept. At the same time, these new file virtualization customers may be amenable to virtualizing block storage, which may open new doors for SANSymphony.


George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.
