StorageSwiss Report 64 – A Decade’s Worth of Predictions

The StorageSwiss Report is a weekly discussion of hot trends and topics in the storage, cloud, and data protection markets. We don’t just cut and paste press releases; we provide insight into why the news is, or isn’t, vital to IT professionals. The report is sent exclusively to subscribers of our opt-in, low-volume newsletter. The exception is January, when we also post the reports to our site, StorageSwiss.com. If you like what you are reading, be sure to subscribe so you don’t miss exclusive content for the rest of the year. Last year we published 63 newsletters that were available only to our subscribers.

In January’s StorageSwiss Reports, we are providing a decade’s worth of predictions for storage, data protection, and cloud. This first report of January focuses on storage in the 2020s. Before covering how storage will change over the next ten years, however, it is essential to understand how the data center will evolve during the same period.

Data Centers in the 2020s

By the end of the decade, artificial intelligence (AI) and machine learning (ML) won’t be a separate set of projects or workloads. Vendors will build AI/ML into every aspect of the data center: infrastructure, applications, and storage systems. The concept of AI-everywhere fundamentally changes storage and data protection infrastructure, as well as how an organization uses the cloud. Organizations won’t get to AI-everywhere overnight, though. It will be a long process that takes most of the decade, and most organizations’ data centers over the next three years will still look very similar to how they look today.

The impact on IT is that it needs to continue solving today’s problems and challenges while at the same time building an infrastructure that carries the organization to where it needs to be at the end of the decade. For example, IT today faces a challenge with its VMware storage infrastructure. In VMware or other hypervisor environments, IT doesn’t necessarily need a faster storage system. It needs one that is easier to operate, is more automated, and can more efficiently provision storage resources while guaranteeing the quality of service each virtual machine requires.

Another significant change is how the public cloud impacts data centers. Will there even be data centers? Will the hybrid data center indeed become the norm, or will organizations look at the cost of their monthly public cloud bill and decide they can do it cheaper on-premises? The public cloud in the 2020s is the focus of the next StorageSwiss Report.

Storage in the 2020s

In the early part of the decade, production storage systems will polarize between extreme-performance solutions and “good-enough” performance solutions. The extreme-performance solutions will target modern workloads like AI, ML, and other big data analytics. The good-enough solutions will focus on core, often virtualized, mainstream workloads and, eventually, containers. Expect most workloads to continue running in virtualized environments early in the decade, so IT needs to solve the challenges related to VM storage that we discuss in our article “Why VMware Storage is STILL a Problem.” It is critical that vendors not get so excited about the AI/ML storage opportunity that they forget about core, mainstream workloads.

By the middle of the decade, the shift to production containers should be well underway, thanks in large part to work done on the Container Storage Interface (CSI). Production containers mean that workloads can spin up, spin down, and scale out in a matter of seconds. IT won’t be able to afford to design its architecture for the worst-case scenario. Instead, it will need to use orchestration and automation to balance resource utilization continuously, as the sketch below illustrates.
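To make the CSI point concrete, here is a minimal sketch of dynamic volume provisioning using the official Kubernetes Python client. The StorageClass name “fast-csi” and the claim name are assumptions for illustration; substitute a class backed by whichever CSI driver your storage vendor ships.

    # Minimal sketch of CSI-driven dynamic provisioning, using the official
    # Kubernetes Python client (pip install kubernetes). The StorageClass
    # name "fast-csi" is an assumption; use one backed by your CSI driver.
    from kubernetes import client, config

    config.load_kube_config()  # authenticate with the local kubeconfig

    # Request a 10Gi volume. The CSI driver creates it on demand, so no
    # administrator has to pre-carve storage for a worst-case estimate.
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="demo-claim"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="fast-csi",  # hypothetical CSI-backed class
            resources=client.V1ResourceRequirements(
                requests={"storage": "10Gi"}
            ),
        ),
    )

    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="default", body=pvc
    )

Because the CSI driver provisions the volume on demand, storage arrives in the same seconds-long window as the container that consumes it, which is exactly why architectures no longer need to be sized for the worst case up front.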

High-Performance Computing (HPC), as we indicated in our article “The Commercial HPC Storage Checklist,” is already on the rise, and AI and ML will accelerate the adoption of these architectures. AI architectures are different from typical HPC architectures. While HPC storage architectures are an excellent starting point, IT won’t be able to copy the designs outright. It will have to adjust for the capacity requirements of AI/ML and the higher return on performance investments. There is going to be a debate over which architecture is best. As we discussed in our article “Are All-Flash Arrays All Wrong for AI and DL Workloads?”, some object storage vendors are positioning themselves to be the AI storage infrastructure of choice, claiming that highly parallel use of hard disks is a better solution. At the same time, traditional HPC storage vendors are optimizing their systems to be AI/ML ready. In either case, it is important to understand the requirements of AI/ML workloads, and our article “Understanding the Challenges That AI at Scale Creates” is an excellent starting point. Another good resource is Episode 9 of our Storage Intensity podcast, “Is HPC Storage Right for AI and ML?”, in which we sit down with Panasas’ Curtis Anderson.

In all three use cases, organizations also need to work through how much flash to use, whether Intel Optane or some other non-volatile memory technology is right for them, and which protocol (file, block, or object) they should use. A future entry in our 2020s prediction series will cover how to make the right decision on these important points. One thing is certain: NVMe will replace SCSI, and unlike our other predictions, the replacement will happen quickly. SAS-based flash media will be almost non-existent by the end of 2021, and NVMe over Fabrics (NVMe-oF) will dominate storage infrastructure by 2025.

Check back soon for the next report in the 2020s Prediction Series, which includes:

  • StorageSwiss Report 64 – Storage in the 2020s
  • StorageSwiss Report 65 – The Public Cloud in the 2020s
  • StorageSwiss Report 66 – Data Protection in the 2020s
  • StorageSwiss Report 67 – Secondary Storage in the 2020s
  • StorageSwiss Report 68 – What Storage, Network and Protocol Wins in 2020

Sign up for our newsletter to get updates on our latest articles and webinars, plus exclusive subscriber-only content.

Twelve years ago, George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, virtualization, cloud, and enterprise flash. Prior to founding Storage Switzerland, he was CTO at one of the nation’s largest storage integrators, where he was in charge of technology testing, integration, and product selection.

