Solve Your Storage Headache – Unlimited Scale but Controlled – Qumulo Core 2.6 Briefing Note

Then and Now

Over the last decade, organizations have experienced massive changes in the data center and the storage industry. The workloads found in modern data centers are vastly different from what was common in the days of dedicated servers with a one-to-one relationship to storage. Storage consisted of individual physical hard drives inside the servers, or of drives grouped together in RAID arrays and network attached storage (NAS) devices. However, ever-growing amounts of data, combined with the major technological advances of virtualization, flash storage and cloud (object) storage, radically transformed the data center into what it is today.

Modern data centers now run many virtual machines (VMs) within a single physical server, and those VMs normally share the same storage in a many-to-one relationship. Large databases can easily span the largest disk drives, while big data applications such as social media or sensor monitoring from Internet of Things (IoT) devices ingest tremendous quantities of data. Data centers must now often store data indefinitely to comply with government rules and regulations as well as corporate policies. In the past, organizations were primarily concerned with managing increasing storage demands; now they must store, protect and manage ever-increasing amounts of data.

Storing and Managing Ever-increasing Data Quantities

As ever-increasing quantities of data overwhelmed existing scale-up NAS solutions and file systems, vendors began to offer scale-out NAS systems with large-capacity file systems to meet this new challenge. But these new systems introduced new problems, such as the difficulty of determining how much space files consumed on the file system or which user was consuming the most space. These early systems were storage aware but did not provide real-time, actionable insight and intelligence about the data footprint. Legacy scale-out storage systems were built to solve the data challenges of 15 or more years ago. They were also not optimized to combine the performance benefits of flash storage with the economics of spinning disk in hybrid architectures.

We are now seeing new scale-out storage and file systems that are data aware, designed to leverage flash storage performance, and capable of storing almost unlimited amounts of data. One good example of this type of advanced file system comes from Qumulo. As we discussed in a previous article, “Raising the Bar for Data Aware Scale-out NAS – Qumulo Briefing Note”, Qumulo created Qumulo Core, a software-defined, intelligent, scale-out file and object storage solution that runs on commodity hardware. At the heart of Qumulo Core is its file system, QSFS (the Qumulo Scalable File System), designed by the team that invented Isilon scale-out NAS.

New for Qumulo Core 2.6: High Density Platform and Machine Intelligent Quotas

The latest Qumulo Core 2.6 release is machine intelligent scale-out storage that scales to hundreds of petabytes and tens of billions of files or objects. It is massively scalable, flexible, and infrastructure- and hardware-agnostic. It is fully programmable and handles both file and object storage, whether on premises, in the cloud or in a hybrid model.

A new feature in the Qumulo Core 2.6 release is machine intelligent quotas, which provides the following benefits:

  • Native Quotas – Native quotas are built into the file system and are always in sync and up to date. They give administrators the ability to move data as needed while dramatically reducing storage administration time.
  • Intelligent Quotas – Every quota in Qumulo Core is a user-defined policy that executes a set of real-time queries. Quotas provide real-time detection of rogue applications and users and immediate enforcement, and they enable more informed provisioning decisions (see the sketch after this list).
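
To make the programmability concrete, here is a minimal Python sketch of creating a directory quota over REST. The cluster address, the /v1/files/quotas route, the payload fields and the token handling are all assumptions made for illustration, not Qumulo's documented API; consult the Qumulo Core API reference for the real interface.

```python
import requests

# Assumed values for illustration only -- not Qumulo's documented API.
CLUSTER = "https://qumulo.example.com:8000"  # hypothetical cluster address
TOKEN = "<access-token>"                     # elided; obtain via the cluster's auth flow

def create_quota(session, directory_id, limit_bytes):
    """Attach a capacity quota to a directory (hypothetical endpoint)."""
    resp = session.post(
        f"{CLUSTER}/v1/files/quotas",        # assumed route
        json={"id": directory_id, "limit": str(limit_bytes)},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    with requests.Session() as s:
        s.headers["Authorization"] = f"Bearer {TOKEN}"
        s.verify = False  # lab convenience only; validate certificates in production
        # Set a 5 TiB quota on the directory with (hypothetical) file id "42".
        print(create_quota(s, directory_id="42", limit_bytes=5 * 2**40))
```

Because the quota is a policy evaluated against the file system's own real-time aggregates rather than a separately scanned tally, it stays in sync as data moves.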

Qumulo also added a new model to its existing line of commodity hardware appliances: the QC360 – High Density Scale-out Storage for Web-Scale IT. It provides maximum cooling and density efficiency while delivering Tier-1 storage performance. The 4U appliance provides up to 3 PB of usable storage per rack (10 nodes) with 10 GB/s of throughput. It also supports the NFSv3, SMBv2.1 and REST protocols.

StorageSwiss Take

With its real-time analytics and numerous other features, Qumulo Core makes managing and storing vast amounts of data a much simpler proposition. The real-time throughput analytics give administrators an accurate, in-depth view of data and throughput load distribution across the entire file system. This gives administrators greater insight into data usage and performance, letting them see exactly how their storage is being used, whether on premises, in the cloud or in a hybrid model. The Qumulo Core 2.6 update addresses two key enterprise concerns: first, the need to scale further than ever; and second, the need to manage who is consuming that scale.
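
For readers who want to pull those throughput analytics programmatically, the sketch below polls a time series over REST. As with the quota example, the route and response shape are assumptions made for illustration; the actual endpoints are defined in the Qumulo Core API documentation.

```python
import time
import requests

CLUSTER = "https://qumulo.example.com:8000"  # hypothetical cluster address
TOKEN = "<access-token>"                     # elided

def sample_throughput(session):
    """Fetch one sample of cluster-wide throughput (assumed route and fields)."""
    resp = session.get(f"{CLUSTER}/v1/analytics/time-series", timeout=5)
    resp.raise_for_status()
    return resp.json()

with requests.Session() as s:
    s.headers["Authorization"] = f"Bearer {TOKEN}"
    s.verify = False  # lab convenience only; validate certificates in production
    for _ in range(3):        # take a few samples, five seconds apart
        print(sample_throughput(s))
        time.sleep(5)
```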

Joseph is an Analyst with Storage Switzerland and an IT veteran with over 35 years of experience in the high-tech industries. He has held senior technical positions with several OEMs and VARs, providing technical pre- and post-sales support as well as designing, implementing and supporting backup, recovery and data protection/encryption solutions, along with disaster recovery planning and testing and data loss risk assessment in distributed computing environments on Unix and Windows platforms.
