Part II – Selecting The Right Storage Architecture Design

Re-Thinking The Storage Architecture for Flash

When re-thinking the storage architecture in the data center, the choice typically comes down to a few factors. The first is how much budget is available. You can only buy the performance you can afford, so don’t bother looking at anything that exceeds your budget.

The second factor is how much total capacity you require. You have to be able to store the data you need. Increasingly, the answer may be zero, meaning you are buying purely for performance. If that is the case, the various server-side solutions may be more appealing.

The third factor is how much performance you need. If you can afford it, always buy a little more than you think you will need, because eventually you will need it, but never buy a lot more than that. Performance, like everything else in technology, will become less expensive over time.

Part of the decision is determining how many servers actually need the performance boost. If the answer is only a few, then server-side solutions become very appealing, and you may not even need a server-side network solution. If many servers could take advantage of the added performance, then a shared storage system or a server-side network becomes more interesting.

Storage Swiss Take

Our primary recommendation is that your next storage architecture be either flash-heavy or all-flash for the performance-sensitive data set. But there are several options for where to place that data in the storage architecture.

Shared flash with a reasonable network upgrade is probably the simplest, most consistent way to deliver higher performance to a large number of servers. Server-side flash is a reasonable choice if budget dollars are limited and only a few servers need performance acceleration. A server-side storage network could provide the performance of a server-side flash device with the reliability and shareability of a shared storage solution, though potentially at the greatest expense.

In our next column we will discuss the second challenge that may cause an architecture change: the growth of unstructured data. The fact that this data is growing is not a surprise; it is the rate at which it is growing that creates the challenge, and traditional Network Attached Storage (NAS) architectures may not be able to keep up. It may be time to re-think that architecture and explore object storage, which will be the subject of our next column.

George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.
