Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) projects can each involve very different types of data. Some projects consist of a relatively small number of huge files, while others have an incredibly high number of tiny files, measured in the billions or trillions, each less than 10KB. The type of data determines what kind of storage system the customer needs to support the project, and how the application accesses that data also shapes the storage infrastructure design. IT planners are often forced to choose between a storage infrastructure that can sequentially deliver big data and one that can transactionally deliver small data. The problem is that AI, ML, DL, and other big data projects often need to support big and fast data, as well as small and large files, all at the same time.
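To make the two access patterns concrete, here is a minimal Python sketch that contrasts them: streaming one large file in big sequential reads (throughput-bound) versus opening and reading thousands of tiny files one at a time (latency- and metadata-bound). All file sizes, counts, and names below are illustrative assumptions for the demo, not figures from the video.

```python
import os
import tempfile
import time

CHUNK = 8 * 1024 * 1024         # 8 MiB reads for the sequential path
LARGE_SIZE = 64 * 1024 * 1024   # one 64 MiB "big data" file (assumed size)
SMALL_COUNT = 2_000             # many tiny files (assumed count)...
SMALL_SIZE = 8 * 1024           # ...each under 10KB, as described above


def stream_large_file(path: str) -> int:
    """Sequential, large-block reads: the throughput-oriented pattern."""
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            total += len(chunk)
    return total


def read_small_files(paths: list[str]) -> int:
    """One open/read/close per tiny file: the transactional pattern,
    dominated by per-operation latency and metadata overhead."""
    total = 0
    for p in paths:
        with open(p, "rb") as f:
            total += len(f.read())
    return total


def main() -> None:
    with tempfile.TemporaryDirectory() as d:
        # Create the test data: one big file and many small ones.
        big = os.path.join(d, "big.bin")
        with open(big, "wb") as f:
            f.write(os.urandom(LARGE_SIZE))

        blob = os.urandom(SMALL_SIZE)
        smalls = []
        for i in range(SMALL_COUNT):
            p = os.path.join(d, f"small_{i:05d}.bin")
            with open(p, "wb") as f:
                f.write(blob)
            smalls.append(p)

        # Time both patterns against the same storage.
        t0 = time.perf_counter()
        nbig = stream_large_file(big)
        t1 = time.perf_counter()
        nsmall = read_small_files(smalls)
        t2 = time.perf_counter()

        print(f"sequential: {nbig / (t1 - t0) / 1e6:,.0f} MB/s")
        print(f"small-file: {SMALL_COUNT / (t2 - t1):,.0f} files/s "
              f"({nsmall / (t2 - t1) / 1e6:,.0f} MB/s)")


if __name__ == "__main__":
    main()
```

Running this on any single storage system typically shows the tension described above: the same device that posts high MB/s on the sequential test delivers a small fraction of that bandwidth on the small-file test, because each tiny read pays a fixed open/close and metadata cost.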
In this Lightboard Video, George Crump, Lead Analyst at Storage Switzerland, and Charles Fan, CEO and co-founder of MemVerge, discuss the “Art of Big and Fast Data”.