Blog Archives

WekaIO for AI and High-Velocity Analytics

Storage Switzerland has previously discussed the problems legacy file systems face in serving modern workloads such as artificial intelligence (AI) and high-velocity analytics. We have also explored the qualities that a modern file system requires.

Posted in Blog

AI Requires a File Storage Overhaul

Artificial intelligence (AI) is becoming established as an important tool for competitive advantage, used by organizations of all sizes and across industries. Legacy network-attached storage (NAS) systems, however, are not equipped to provide the levels of throughput that these workloads…

Posted in Blog

15 Minute Webinar – Finding the Right File System for AI and ML Workloads

Artificial Intelligence, Machine Learning and High-Velocity Analytics workloads are going mainstream. Enterprises of all types and sizes want to seize the opportunity their data presents. As these workloads move from development to production, organizations face a significant challenge with…

Posted in Webinar

Debunking AI and High Velocity Analytics Benchmark Results

Benchmarks are necessary when trying to understand the performance characteristics of a particular storage system in a particular environment. The problem is that they are susceptible to manipulation by vendors seeking the best marketing results. The Standard Performance Evaluation Corporation…

Posted in Blog

Designing a File System for AI and High-Velocity Analytics

Our previous blog highlighted the challenges of supporting artificial intelligence (AI), machine learning (ML) and deep learning (DL) workloads with legacy file systems. Control node bottlenecks, inferior (or nonexistent) non-volatile memory express (NVMe) drivers, and inefficient capacity utilization are…

Posted in Blog

Lightboard Video: Accelerating AI by Solving the Storage Challenge

Artificial Intelligence workloads push current IT architectures to their extremes. For the first time, both computing horsepower and all-flash storage I/O can be overwhelmed by AI demands. GPUs from companies like Nvidia have largely solved the computing problem, but the…

Posted in Video

Why Legacy File Systems Can’t Keep up With AI and High-Velocity Analytics

Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL) workloads often start as skunkworks projects within an organization. After the proof of concept and testing, they move into production, which means storage performance and capacity demands for the…

Posted in Blog

Storage Programmability: The Key to the Software-defined Data Center

The terms “software-defined data center” (SDDC) and “software-defined storage” (SDS) are commonly thrown around, typically associated with the abstraction of core infrastructure functionality into a common software plane that can then be deployed on low-cost, commodity hardware. This definition…

Posted in Blog

Designing Storage for Multi-Million IOPS Performance – Apeiron Briefing Note

There is no doubt that specific applications have an ever-growing need for more IOPS, millions in fact, and the number of organizations implementing these applications is on the rise. The traditional bottlenecks to achieving millions of IOPS are gone. Networks…

Posted in Briefing Note

Fifteen Minute Friday: Will the Software Defined Data Center Ever Happen?

Software-defined data centers (SDDCs) leverage intelligent software to manage commodity hardware, creating a flexible data center that meets performance and capacity requirements while simplifying operations and reducing overall data center costs. While the vision of the SDDC sounds ideal…

Posted in Webinar