Storage Switzerland and Tegile teamed up on a webinar entitled “What’s Best for VDI: Hybrid or All-Flash Storage?” Participants in the live event were able to ask questions of Storage Switzerland’s George Crump and Chris Tsilipounidakis from Tegile Systems. Here is the transcript of the Q&A section of the webinar.
“How do customers choose between Hybrid and Flash? Can you use both?”
George Crump: When I talk to an end user, they’ll say, “I’ve got $250,000 allocated for storage this year.” Well, if you can get an All-Flash array that meets your capacity demands with that budget, then go for it. There’s no question that it eliminates performance tuning, and you don’t have to think about where to place workloads because they’re all going to be well placed. If that’s not you, because you can’t afford an All-Flash array within your budget or for other reasons, then hybrid arrays are definitely less expensive per gigabyte. They’re also more logical in a way because, let’s face it, most of the data in the data center is not active, so putting it on a hard disk is fine. The question I always get asked is: if there’s a tier miss or a cache miss, what is performance going to look like? If you’re just experimenting, that’s fine. But if you’re genuinely concerned about that, then I would lean toward either an All-Flash system or a hybrid system with some “extra flash,” if you will, in it.
We actually did a webinar just a few months ago, which you can look up on storageswiss.com, that talks specifically about how you can get All-Flash-like performance from a hybrid system. It’s fair to say that a hybrid system will require a little more work. I would want to go in and make sure that certain virtual machines and certain virtual servers, especially my Oracle workloads, are always going to get flash performance. I think the other big thing to look at here goes back to that unified architecture: will you benefit from having both block and file from the same system?
Chris, what are your thoughts?
Chris Tsilipounidakis: George, I couldn’t agree with you more. Having been a sales engineer for the past eight years, and specifically with Tegile over the last couple of years, I think the number one consideration is that your use case (specifically VDI) dictates the type of storage on the back end that will provide the most benefit, i.e. either All-Flash or hybrid, just like George talked about. There are some considerations, though, that my customers need to flesh out before we can start talking about which storage solution makes the most sense. For example, average IOPS per desktop multiplied by the number of desktops in the pilot: light task users run 5-10 IOPS per desktop, whereas heavy task users run anywhere from 10-40 IOPS per desktop. Multiply that by the number of users you have today, but also understand that in the future you’re probably going to want to increase the number of desktops.
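As a rough sketch of the arithmetic Chris describes (the per-desktop IOPS figures mirror the ranges above; the desktop counts and growth factor are purely hypothetical, not Tegile sizing guidance):

```python
# Back-of-the-envelope VDI sizing: steady-state IOPS = per-desktop IOPS x desktop count.
# Planning figures use the top of each range mentioned above (light: 5-10, heavy: 10-40).
IOPS_PER_DESKTOP = {"light": 10, "heavy": 40}

def required_iops(desktops_by_type, growth_factor=1.0):
    """Total steady-state IOPS the array must sustain, with optional growth headroom."""
    base = sum(IOPS_PER_DESKTOP[kind] * count
               for kind, count in desktops_by_type.items())
    return int(base * growth_factor)

# Hypothetical 500-desktop pilot that plans to double within a year.
pilot = {"light": 400, "heavy": 100}
print(required_iops(pilot))                     # sizing for today
print(required_iops(pilot, growth_factor=2.0))  # sizing after doubling
```

Note this only covers the steady-state average; as George discusses below, peak (boot storms, login storms) is what users actually judge you on.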
So to George’s point, there are some inherent benefits to running VDI on a hybrid deployment. The cool thing about some storage technologies is the ability to, in real time, dial up the amount of SSD or flash in a hybrid deployment without taking that application or use case down for any period of time. Now, some of my customers love the “new shiny toy,” and it doesn’t matter to them that their performance requirements could be more than sufficiently supported by a hybrid solution; they want the new, shiny All-Flash array. And to George’s point, if cost is not a consideration, All-Flash makes a lot of sense.
My only caveat with All-Flash – and this is something for customers to consider who are looking to deploy VDI on All-Flash – is you should have a pretty good idea of what you think you’re going to be doing from a scalability perspective six months, one year, and three years down the road. If you think that you’ll continue to increase the number of virtual desktops in your environment, then All-Flash makes sense.
Here’s the way I weigh things: All-Flash is great from a cost-per-IOPS perspective, but hybrid is better from a cost-per-gigabyte perspective, especially when you have a storage vendor whose hybrid technology has algorithms that inherently manage, in real time, where data should reside in cache. And to George’s point, the whole point of a hybrid array is to maintain a high cache hit ratio. We do not want read requests going down to disk.
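The trade-off Chris describes reduces to two divisions. A quick sketch, with entirely made-up prices, capacities, and IOPS ratings chosen only to illustrate the typical direction of the comparison:

```python
# Hypothetical cost-per-GB vs cost-per-IOPS comparison for two arrays.
# All numbers below are illustration values, not real product specs or prices.

def cost_metrics(price_usd, capacity_gb, iops):
    """Return the two unit costs used to compare array economics."""
    return {"usd_per_gb": price_usd / capacity_gb,
            "usd_per_iops": price_usd / iops}

all_flash = cost_metrics(price_usd=250_000, capacity_gb=20_000, iops=300_000)
hybrid    = cost_metrics(price_usd=150_000, capacity_gb=100_000, iops=60_000)

# With numbers like these, All-Flash wins on $/IOPS and hybrid wins on $/GB.
print(f"all-flash: ${all_flash['usd_per_gb']:.2f}/GB, ${all_flash['usd_per_iops']:.4f}/IOPS")
print(f"hybrid:    ${hybrid['usd_per_gb']:.2f}/GB, ${hybrid['usd_per_iops']:.4f}/IOPS")
```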
Here’s a benefit of introducing in-line deduplication and compression: say I have a physical amount of flash that’s 2TB, but I’m doing in-line deduplication and compression and achieving a 50% savings. I can theoretically take that physical 2TB and turn it into 4TB of logical space because I’m only storing unique instances of data. So effectively I’m getting the same performance and cache hit ratios that I would get from an All-Flash array, but I’m getting it from a hybrid array. Some storage vendors have both the hardware intelligence and the software intelligence to support that.
That’s a very long-winded answer, but basically, at the end of the day, your use case dictates whether you should go All-Flash or hybrid; then factor in the budget you have for the project to support that.
“What are some good ways to test how many virtual desktops I can have per host or per storage system?”
George: To Chris’ point, you’ve got two things to look at when considering VDI performance. You’ve got the average (the number of IOPS a desktop needs on average), and then you’ve got peak. With peak, your acceptance rate will be judged more on how your system responds at peak than on how it responds on average. If you’re getting average performance while I’m working on my Word doc, that’s great, but if it takes a minute or more to launch Word, you’re going to get complaints. That’s a sudden peak demand.
There are a couple of really good testing tools to use here. One is very specific to the VDI environment, to the point that it almost perfectly simulates it: Login VSI. However, there’s some work involved in setting it up. Another is a solution from a company called Load DynamiX, an appliance where you can simulate a VDI setup. Both of these tools will generate variable bursts throughout the day. Like Chris said in the webinar, this is a very bursty environment, and we want to make sure that, at burst, we can deliver the performance we need. We can’t size everything for the worst-case burst because it’s too expensive, so you want to make sure your system can handle the variance of bursts throughout the day. So those are my thoughts.
“Any thoughts on performance measurement and performance testing, Chris?”
Chris: Oh yes. When I talk to customers about implementing the right type of storage configuration, it can get hairy. What I mean by that is it’s a fork in the road with several prongs. First, it needs to be a consultative conversation between the customer and the vendor’s storage engineer, for example, to understand the customer’s plans for their VDI pilot. Second, you need to identify the number of desktops, the type of desktops being used, what those desktops run, and the scalability plan six months or a year down the road. Are we going to be doubling the number of virtual desktops? What connection broker are you going to use? Is it XenDesktop, is it Horizon View?
All of these things play into the next conversation, which, as George eloquently put it, involves Login VSI and Load DynamiX. I’ve used both for customers that have evaluated Tegile in POCs. What I like a lot about Login VSI is that once you get an idea of the number of desktops in use, and of the bursty workflow in your specific environment, you can emulate that in a POC and get a pretty good idea of what the storage array will be capable of supporting. My only suggestion is, if you do engage in a POC, make sure that you identify your requirements up front, so that the storage array being used in the POC is purpose-built specifically to support the requirements you have for your VDI deployment.
George: This is not an environment where you run Iometer and call it a day; it just won’t give you any kind of accurate results.
“Will we see companies building a hybrid system that uses multiple types of flash, e.g. SLC and MLC?”
George: First of all, we have seen this. But I don’t know that there is as much value in mixing SLC and MLC as there was two or three years ago. We get really good durability out of MLC nowadays, especially in an All-Flash array, because you’re just not rotating flash as quickly. I do think we’ll start to see hybrid memory systems once another type of memory is out. You may have heard of NVM (non-volatile memory) technologies, which will give you DRAM-like performance and DRAM-like durability but also flash-like persistence, meaning they won’t lose data when power is removed from the system. We might see some companies venture off into an MLC/TLC configuration, but I don’t know that we’ll see a lot of that. I do think that as we get into the next generation of RAM-based memory, that’s when we’ll see it. And frankly, I think hybrid vendors will have an advantage in that area, because they’ve already managed moving data from one type of storage to another. For them, whether it’s NVM, phase change, or flash, it should be a pretty straightforward deal. Chris?
Chris: I couldn’t agree more. Hybrid, traditionally, has meant disk plus something else. It doesn’t necessarily have to mean that down the road. Storage architectures with an underlying operating system that’s extensible enough can basically allow the introduction of what George just described: non-volatile memory within the storage array, with the hard disks on the back end replaced by commodity-grade MLC or TLC, an even more cost-efficient grade of commodity SSD. At the end of the day, I think the number one consideration from a storage perspective is understanding the customer’s requirements, and identifying the particular components that will support those requirements in a performance-oriented and cost-efficient way.
My whole point is that Tegile’s architecture is built on the premise that we’re not locked into a specific type of storage media. If down the road we notice there’s enough customer need, and the industry is heading toward NVM-based storage arrays with cMLC on the back end, that’s something we’re absolutely going to look at doing. But to answer this question, I think the number one consideration is to look for a storage infrastructure that is flexible enough on the back end to swap out those storage media components without a significant rip and replace, or asking you to move from one storage architecture to another.
“Do you support SMB3 with 40Gb Ethernet or InfiniBand?”
Chris: We do support SMB 3.0, absolutely. We currently do not support 40Gb Ethernet or InfiniBand, simply because we do not have enough customers asking for them. We support all verticals, but we don’t see enough of our customers wanting to deploy 40Gb Ethernet or InfiniBand to have our engineering team focus on that from a protocol and network perspective. But we absolutely do support SMB 3.0, especially for customers running Windows Server 2012 with Hyper-V.
“I hear Flash is expensive, how is Tegile different from the other vendors?”
Chris: We’re using enterprise-grade MLC, while a lot of the storage vendors out there are using commodity-grade MLC and placing the flash wear-endurance software in their operating system, which adds a bit of overhead. The specific SSDs Tegile is using are enterprise-grade MLC, which allows for an order of magnitude higher write endurance than commodity-grade MLC; we’re talking petabytes as opposed to terabytes of write endurance. The companies we source our SSDs from are HGST, which was recently purchased by Western Digital, and SanDisk. The interesting thing about both of those SSD manufacturers is that they’re both also strategic investors in Tegile. So they’ve invested in Tegile, and from a COGS (cost of goods sold) perspective, we’re actually getting enterprise-grade SSDs at the same cost that a commodity-grade vendor would pay for an SSD.
The entire webinar is available to view on demand.