Legacy network attached storage (NAS) architectures fall short when it comes to accommodating modern unstructured data requirements. A key problem is that they were not designed to support the unprecedented counts of small files generated by workloads such as analytics, artificial intelligence and the Internet of Things (IoT). The metadata overhead of serving millions or billions of files quickly turns the storage controller into a performance bottleneck, forcing organizations to either sacrifice performance or invest in an expensive infrastructure upgrade before they actually need more storage capacity. The problem is exacerbated as more data must be retained for longer periods of time, especially because most legacy NAS architectures do not provide seamless data migration between on-premises infrastructure and the cloud.
Qumulo’s CEO, Bill Richter, will share his perspective on how file data sharing and retention requirements are changing, and why a new file storage architecture is required, in a live discussion with Krista Macomber, Senior Analyst at Storage Switzerland, at 1:15 p.m. EST on Wednesday, July 10. Be sure to register so you don’t miss this chance to learn how to revamp your file system architecture to gain visibility and scalability and meet your performance requirements, all while staying within your budget.