Planning For The Next Five Years of Performance

In my last few columns, I have discussed the value of scale-out and scale-up storage systems. I’ve also discussed the potential for All-Flash systems to last twice as long as hard drive-based systems. In those columns I’ve tried to give some guidance on how to select the storage architecture that makes the most sense for you. One of the key elements in that selection process is making sure that any storage system type you select has the potential to meet your performance demands over the next five years. The problem that I think many data centers will face is determining what their performance demands will look like in that time frame.

In his article, “Are You Planning For Storage Performance?”, my colleague Colm Keegan outlines the value of planning for performance and a process to get there. This process is also very valuable when deciding between scale-up or scale-out storage. The key is capturing the performance requirements of each workload in your environment and then having the ability to replay those workloads against any new system that you may be considering.

Predicting The Future

The challenge with projecting the performance needs of the environment five years into the future is that IT has limited insight into what the business will look like in five years. IT can articulate the total potential IOPS of a system, but to application owners that technical jargon does little to explain how their applications’ workloads will scale on a given storage platform.

What IT needs to do instead is articulate the performance capabilities of any system being considered and specify its ability to support future workload growth. For example, the goal should be to describe the performance of a new system in terms of how many more virtual desktops per host it could support, how many more users per application it could support, or how many more applications per host it could support. These are terms that business application owners can more easily understand and build into their own forecasts.
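To make this concrete, here is a minimal back-of-envelope sketch of that translation. All of the figures below are invented assumptions for illustration; real numbers would come from capturing each workload's actual I/O profile, not fixed constants.

```python
# Hypothetical translation of raw IOPS headroom into business-facing
# units. Every figure below is an illustrative assumption, not a
# measurement from any specific storage system.

SYSTEM_IOPS = 200_000        # assumed peak IOPS of the candidate system
CURRENT_IOPS_USED = 80_000   # assumed current steady-state load

# Assumed per-workload I/O costs; these vary widely in practice and
# should be derived from workload capture.
IOPS_PER_DESKTOP = 25        # steady-state virtual desktop
IOPS_PER_APP_USER = 10       # typical application user

headroom = SYSTEM_IOPS - CURRENT_IOPS_USED
extra_desktops = headroom // IOPS_PER_DESKTOP
extra_users = headroom // IOPS_PER_APP_USER

print(f"Additional virtual desktops supported: {extra_desktops}")
print(f"Additional application users supported: {extra_users}")
```

The point is not the arithmetic itself but the framing: "this system can absorb roughly 4,800 more desktops" is a statement an application owner can plan around, where "200,000 IOPS" is not.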

With the right tools, like those Colm describes in his article, these different workloads could all be run against the storage system at the same time. The workloads can then be “dialed up” so that more of a certain type, or of several types at once, are run against the potential new storage system. This would allow the storage planner to give guidance to the rest of IT and the organization as to how far each system under consideration would scale.

Scale-Out Verification

The ability to capture and replay workloads would also allow the storage planner to understand how a scale-out system performs. There are two performance points to understand. The first is the total performance of the initial cluster, since most systems need three or more nodes to get started. The second is how much performance increases when just one more node is added.

A workload simulation tool would allow you to dial up workloads until you knew each of these answers for your specific environment. First you could dial up workloads until you knew the maximum capabilities of the initial cluster. Then you could add one more node to see how many more workloads could be supported. A tool like this would also allow you to verify the common scale-out claim of linear scaling.
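The linear-scaling check described above can be sketched in a few lines. The measurements here are invented placeholders; in practice each data point would come from dialing up replayed workloads until that cluster size saturates, and the 10% tolerance is an arbitrary assumption.

```python
# Hypothetical check of the "linear scaling" claim for a scale-out
# cluster. The sample data is invented for illustration only.

# (node_count, max_sustained_IOPS_observed) - assumed measurements
measurements = [(3, 150_000), (4, 198_000), (5, 245_000)]

# Per-node gain between successive cluster sizes.
gains = [
    (b_iops - a_iops) / (b_nodes - a_nodes)
    for (a_nodes, a_iops), (b_nodes, b_iops)
    in zip(measurements, measurements[1:])
]

# Call scaling "roughly linear" if every added node contributes about
# the same amount, within an assumed 10% tolerance of the first gain.
baseline = gains[0]
is_linear = all(abs(g - baseline) / baseline <= 0.10 for g in gains)

print(f"Per-node gains: {gains}")
print(f"Roughly linear scaling: {is_linear}")
```

If the per-node gain shrinks noticeably with each added node, the system scales, but not linearly, and the five-year projection should be discounted accordingly.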

Scale-Up vs. Scale-Out (again)

As I stated in my original column, what some storage planners may find is that the performance of the single scale-up system being considered, and the performance of the initial scale-out cluster, far exceed the likely performance demands of the data center for the next five years or more. In other words, you are never going to need to scale your scale-out system. If that is the case, why consider it at all? The exception is when the scale-out system offers a must-have feature that is not available elsewhere; multi-tenancy might be a good example.

Conclusion

As Colm pointed out in his article, “Are You Planning For Storage Performance?”, performance planning is not only important for understanding the limits of your current system, it is also critical for selecting the right system as you go through a storage refresh, as well as for understanding when that new system will reach its limits. This is especially true today because there are so many storage architectures to choose from.


Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS and SAN, Virtualization, Cloud and Enterprise Flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.

