AI Needs an NVMe-Optimized File System

Analytics is evolving from big data and machine learning to artificial intelligence. Machine learning is the analysis of data at rest; artificial intelligence (AI) is the analysis of data in real time. Machine learning is predictive; AI is cognitive. A storage infrastructure supporting an AI environment needs high bandwidth, low latency, elasticity in response to workload demands, and rapid response to multiple parallel analytic queries.

Learn about AI and storage in our on-demand webinar: “Three Reasons Why NAS is No Good for AI and Machine Learning.”

Traditionally, most AI initiatives start as skunkworks projects, often hosted in the cloud. Then, as the project moves into production, the environment is handed over to IT, which tries to host it on existing storage and file system technology. The problem is that legacy storage technologies come with rigid limits on performance and scalability. Also, some organizations want to leverage cloud resources occasionally, so confining AI to the four walls of an existing data center is an ineffective strategy. At the same time, going cloud-only doesn’t make sense given the long-term costs of storing massive AI data sets in the cloud. As GPUs continue to decrease in price and increase in power, most organizations prefer an on-premises approach to AI workloads.

Many AI projects leverage GPUs or cloud provider resources to speed the analysis of data. The challenge for most organizations is how to build a storage infrastructure that supports deep learning as data sets grow and as the infrastructure expands beyond a GPU server or two. Shared file systems exist today, but they are not optimized for the metadata-intensive, small-file I/O requirements of AI workloads, and they can’t support the temporary bursting of workloads into the cloud. Organizations need both a file system that can saturate the on-premises compute layer and the ability to burst AI workloads into the cloud while the on-premises infrastructure is built out.
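To see why small-file, metadata-intensive access is so demanding, consider the difference between reading the same number of bytes as thousands of small files versus one large sequential file. The following is a minimal Python sketch, not from the webinar: the file counts, sizes, and directory layout are illustrative assumptions, but the pattern mirrors an AI training set made of many small samples, where every access pays an open/lookup cost before any data moves.

```python
# Minimal sketch (illustrative assumptions): compare a metadata-heavy
# small-file read pattern with one large sequential read of the same
# total size. Counts and sizes below are placeholders, not benchmarks.
import os
import time
import tempfile


def make_small_files(root, count=1000, size=16 * 1024):
    """Create `count` small files of `size` bytes each."""
    for i in range(count):
        with open(os.path.join(root, f"sample_{i:06d}.bin"), "wb") as f:
            f.write(os.urandom(size))


def read_small_files(root):
    """Open, read, and close every file; each access pays a metadata cost."""
    total = 0
    for name in os.listdir(root):
        with open(os.path.join(root, name), "rb") as f:
            total += len(f.read())
    return total


def read_one_large_file(path):
    """A single sequential read: one open, one stream of data."""
    with open(path, "rb") as f:
        return len(f.read())


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as root:
        small_dir = os.path.join(root, "small")
        os.mkdir(small_dir)
        make_small_files(small_dir, count=1000, size=16 * 1024)

        big_path = os.path.join(root, "large.bin")
        with open(big_path, "wb") as f:
            f.write(os.urandom(1000 * 16 * 1024))  # same total bytes

        t0 = time.perf_counter()
        small_bytes = read_small_files(small_dir)
        t1 = time.perf_counter()
        large_bytes = read_one_large_file(big_path)
        t2 = time.perf_counter()

        print(f"1000 small files: {small_bytes} bytes in {t1 - t0:.3f}s")
        print(f"1 large file:     {large_bytes} bytes in {t2 - t1:.3f}s")
```

Even on a laptop the small-file pass is noticeably slower, and on a shared file system serving many GPU clients the per-file metadata operations, not raw bandwidth, typically become the bottleneck.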

Most file systems are also not optimized for high-performance storage technologies like NVMe flash, they don’t provide the right level of data protection, and they don’t support temporary cloud data movement.
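Part of what “optimized for NVMe” means is keeping the drive’s deep hardware queues full. NVMe devices only approach their rated throughput when the host issues many I/Os in parallel, which is exactly what a serial, one-request-at-a-time file system access path fails to do. The sketch below is a simplified illustration of that idea, not any vendor’s implementation; the directory path and worker count are placeholder assumptions.

```python
# Minimal sketch (illustrative, not a product design): compare serial
# reads with parallel reads issued from a thread pool. Parallel reads
# give an NVMe device a deeper queue of outstanding I/O to work on.
# (A real benchmark would drop the page cache or use O_DIRECT between
# runs so the second pass is not served from memory.)
import os
import sys
import time
from concurrent.futures import ThreadPoolExecutor


def read_file(path):
    """Read one file completely and return its size in bytes."""
    with open(path, "rb") as f:
        return len(f.read())


def read_serial(paths):
    """One read at a time: the device queue is mostly idle."""
    return sum(read_file(p) for p in paths)


def read_parallel(paths, workers=16):
    """Many reads in flight at once: higher effective queue depth."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(read_file, paths))


if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    paths = [os.path.join(root, n) for n in os.listdir(root)
             if os.path.isfile(os.path.join(root, n))]

    t0 = time.perf_counter()
    serial_bytes = read_serial(paths)
    t1 = time.perf_counter()
    parallel_bytes = read_parallel(paths)
    t2 = time.perf_counter()

    print(f"serial:   {serial_bytes} bytes in {t1 - t0:.3f}s")
    print(f"parallel: {parallel_bytes} bytes in {t2 - t1:.3f}s")
```

A file system built for NVMe applies the same principle internally, parallelizing data and metadata paths across drives and nodes so a handful of GPU servers can drive the storage at full speed.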

While many organizations are starting their journey to AI, they aren’t aware of the unique requirements it places on the storage infrastructure. In our on-demand webinar, join Storage Switzerland, WekaIO, and HPE for a roundtable discussion on the shortcomings of traditional file systems and why they are ill-suited to meet the demands of AI. We then provide a checklist of capabilities that IT professionals should look for in an AI storage infrastructure.

Watch On Demand

George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.
