Object storage has been getting a lot of attention lately. This way of storing data does away with the old metaphors of volumes and folders: data is stored in a flat architecture, and each object (file) is assigned a unique identifier. Object storage’s popularity is being driven by the limitations of standard file systems. Does it make sense for IT to continue down the path of conversion to object storage, or does IT just need a better file system?
The Object Storage Advantage
Object storage claims several advantages over file systems. The first is unlimited object quantity. The number of items a storage system (file or object) must store is increasing exponentially, largely driven by the Internet of Things (IoT) and Big Data analytics processes. Cost-effective and theoretically unlimited capacity is another major claim of object storage, thanks to the system’s ability to leverage commodity hardware; most NAS systems come bundled with hardware, putting them at a cost disadvantage. Object storage systems also support much richer metadata than traditional file systems. Objects can be tagged with data like a GPS location or the serial number of the device that created the data. Finally, many object storage systems can scale across data centers or into the cloud, allowing them to meet the needs of the modern, distributed organization.
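The flat-namespace-plus-metadata model described above can be sketched in a few lines of Python. This is a generic illustration of the concept, not any particular vendor’s API; the class and field names are hypothetical:

```python
import uuid

class ObjectStore:
    """Minimal sketch of a flat object store: no folder hierarchy, just IDs."""

    def __init__(self):
        self._objects = {}  # unique object ID -> (data, metadata)

    def put(self, data, metadata=None):
        # Each object gets a unique ID instead of a path in a hierarchy.
        object_id = str(uuid.uuid4())
        self._objects[object_id] = (data, metadata or {})
        return object_id

    def get(self, object_id):
        return self._objects[object_id]

store = ObjectStore()
# Rich, application-defined metadata travels with the object itself.
oid = store.put(b"sensor reading",
                {"gps": "47.6,-122.3", "device_serial": "SN-1234"})
data, meta = store.get(oid)
```

Contrast this with a traditional file system, where the “address” of a file is its path in a directory tree and extended metadata support is limited.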
Current file system technology has gone at least a decade without a major upgrade, but most file systems can easily support the new file count and capacity demands of the enterprise. Most also provide better performance than object storage, since they are not burdened with sophisticated metadata management. And the reality is that most applications, especially converted ones accessing object storage through a gateway, don’t exploit the rich metadata that object storage provides. Finally, a few file systems are now available in a software-defined format and can run on commodity hardware similar to object storage, which enables them to hit similar price points.
Qumulo – Improving NAS Instead of Replacing It with Object Storage
A modern file system (the last major file system was released over a decade ago) needs to embrace all of the above capabilities, continually improving performance, file count support, and capacity. But it also needs to become more elastic, spanning across data centers and into the cloud, which is exactly what Qumulo is focusing on in its latest release.
Qumulo File Fabric (QF2) is Qumulo’s new software-defined file system, designed to run on a variety of hardware platforms. At the heart of the system is Qumulo Core, which is designed for massive scale and can use off-the-shelf flash and hard disk storage. It is powered by the QF2 File System, which provides real-time visibility into data and storage.
The solution is very visual, providing instant file-system insight without IT having to wait on slow file system walks. It also uses an efficient block-based data protection methodology instead of RAID-based data protection, making it ideal for high-capacity environments.
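Qumulo’s actual protection scheme is proprietary, but the general idea behind block-based protection can be illustrated with a minimal sketch: protect a set of fixed-size data blocks with a parity block, so any single lost block can be rebuilt from the survivors. This toy example uses simple XOR parity:

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together, byte by byte."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Protect three data blocks with one parity block.
data_blocks = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data_blocks)

# Simulate losing the middle block, then rebuild it from the
# surviving blocks plus the parity block.
recovered = xor_blocks([data_blocks[0], data_blocks[2], parity])
```

Because protection operates on blocks rather than whole drives, rebuilds touch only the affected data, which is why block-based schemes suit high-capacity environments better than drive-level RAID rebuilds.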
The first iterations of Qumulo Core focused on eliminating the limitations of standard scale-out architectures, improving performance and scale as well as the ability to store tens of billions of files. This makes it a better alternative to existing legacy NAS architectures and may keep some organizations from being forced to move to object storage.
The challenge most scale-out file systems face is how to scale “out” of the data center. Most can only replicate to a like system, ruling out the public cloud as an option. If they support the public cloud at all, it is as an archive target for old data.
Scale-out, High Performance Amazon NAS
The new release puts Qumulo on the path to a fabric approach. The company calls this initiative the Qumulo File Fabric (QF2). In this release, it provides the capability to instantiate a Qumulo system in Amazon EC2. Unlike other offerings, which can only run on a single EC2 node, it provides a true scale-out file system in the cloud, running on as many EC2 nodes as required. This cloud scale-out capability means organizations can move applications to the cloud without rewriting them for object storage and still get very high performance.
QF2 also provides continuous replication. An IT administrator can identify a particular folder or volume and have it continuously replicated to a Qumulo system in another location, including the Amazon cloud. This provides a basic data distribution mechanism for Qumulo, and in the future it will be extended to full synchronization.
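Qumulo’s replication engine is internal to the product, but the concept of one-way continuous replication can be sketched generically: periodically walk the source tree and copy any file that is new or whose modification time has changed since the last pass. A stdlib-only Python sketch (all names here are illustrative):

```python
import os
import shutil

def replicate_once(src, dst, last_seen):
    """One pass of a simple one-way replicator.

    last_seen maps relative path -> mtime observed on the previous pass.
    Copies new or modified files from src to dst; returns the paths copied.
    """
    copied = []
    for root, _dirs, files in os.walk(src):
        for name in files:
            src_path = os.path.join(root, name)
            rel = os.path.relpath(src_path, src)
            mtime = os.path.getmtime(src_path)
            if last_seen.get(rel) != mtime:
                dst_path = os.path.join(dst, rel)
                os.makedirs(os.path.dirname(dst_path), exist_ok=True)
                shutil.copy2(src_path, dst_path)  # preserves timestamps
                last_seen[rel] = mtime
                copied.append(rel)
    return copied
```

A real replication engine would run incrementally off file-system change notifications rather than full walks, handle deletes and conflicts, and move only changed blocks over the wire; this sketch shows only the basic source-to-target flow.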
Qumulo Trends Service
QF2 also applies big data analytics to detecting and preventing problems. Both on-premises and cloud-based instances are continuously monitored in the cloud. IT can access historical trends that help lower overall storage costs and optimize workflows.
One of the big challenges with any storage system selection is deciding whether the system will meet the needs of the organization. Evaluation typically means negotiating with the vendor, waiting for equipment to arrive, implementing that equipment, testing it, and then sending the equipment back.
Qumulo enables customers to try QF2 in a VM or in the Amazon cloud. Evaluating QF2 on Amazon Web Services is not only a useful test of the software’s capabilities, it also gives the organization an opportunity to see how its applications will run in the cloud.
Sometimes in IT there is a tendency to race to the shiny new thing. When it comes to unstructured data, that “new thing” is object storage. While object storage has potential, especially with object-native applications that can exploit its metadata capabilities, it faces limitations when used in a data center full of legacy applications. Assuming the file system can be fixed and extended, native NAS may still be the better choice for many organizations. Qumulo is one of the few vendors that understands this need and is addressing it.