Storage Q&A: Is The Next Generation Data Center the “End-Game” for IT?

The next generation data center is here: highly virtualized servers, extreme VM density, and an emphasis on flexibility, low cost and efficiency. That’s the thumbnail look at this new data center concept. Here to talk about it in a little more detail are Jay Prassl from SolidFire and Storage Switzerland Senior Analyst Eric Slack.

I started our interview by asking Eric what the next generation data center is.

Eric: Essentially, it’s a cloud or cloud-like environment characterized by extreme VM density, meaning very high virtual machine-to-host ratios. It’s a centralized IT infrastructure that allows organizations to take advantage of large, aggregated pools of resources that support very high levels of compute activity.

It helps maximize resource efficiency, which drives costs down. A lot of organizations, of course, have to make money. They also leverage automation in the infrastructure to improve efficiency and reduce operational costs. One other thing to mention: this isn’t just for cloud providers.

Enterprises also see the advantages of the next generation data center model. They’re setting up internal private clouds to provide a self-service IT model for their employees, to minimize costs and improve responsiveness, which has always been something IT has tried to do for companies.

Charlie: Jay, is there anything you want to add to that? Or is there something else that comes to mind when you guys say next generation data center?

Jay: Yeah, I would add just a few things to that, Eric.

Really, at the core, the next generation data center is a framework for thinking about how both enterprises and cloud service providers compute, and how they offer resources to their internal and external customers. We use four descriptors to help nail down the next generation data center concept, and at the core is this idea of agility. It has to be flexible and able to be deployed in a very agile way.

That system must also be able to scale; it should be automated and have the properties of end-user self-service, if the customer wants to take advantage of that. Finally, the resources provided should be delivered not only predictably, but also guaranteed over time. You can apply next generation data center constructs not only to storage, as we’re talking about today, but at the server and networking level as well. It truly is a data center point of view.

Watch the webinar “Storage Requirements For The Next Generation Data Center”, available on-demand.

Charlie: Eric, why is the next generation data center really the end game of IT?

Eric: Well, when you say “end game”, we’re talking about an ideal: a model of extreme consolidation of resources and control, and of flexibility and automated operation. This enables maximum performance – bigger compute, more data, and so on – along with maximum efficiency and minimum cost.

Some cloud providers are closer to this ideal than others, but the point is that the next generation data center is a model and a direction that IT can strive for. Similar to the way the cloud is seen as the eventual delivery model for IT services in the larger sense, the next generation data center is the physical infrastructure that enables it.

Jay: Now, I would add that if you look at the end game of where enterprises are going to compute, and how they’re going to compute long term, there’s an excellent benchmark out there in the ecosystem today: the large-scale public cloud providers. These companies are able to offer resources – storage, compute and networking – on demand, in a way that is not only highly efficient but also very profitable for them as companies.

Enterprises are looking at large cloud providers as a leading indicator of how to compute over time. Some of that pressure is simply being placed on those companies from within.

You have individuals who go outside the firewall because they can get resources or software from someone like Amazon or Rackspace in a very quick and easy manner. So a lot of the large enterprises we work with are starting to look at how they can take the economics of large-scale cloud computing, or on-demand computing, and bring it inside their data center. That’s a big part of what the next generation data center is all about, and it’s one of the key drivers of this trend in how IT resources are delivered.

Charlie: Jay, how do next generation data centers break storage?

Jay: You can look at this a couple of different ways. When you look at the next generation data center you have to look at it holistically, and a lot of it has to do with application deployment times.

You have companies out there trying to deploy applications very quickly, whether they’re developing something fast or bringing an external product or service to market. The speed at which they can do that has a direct impact on their bottom line. The ability to make the resources to do that [available quickly] is one of the key drivers of the next generation data center.

So if you take this agility and combine it with automation and self-service, storage is simply one aspect of that. What is happening, though, is that this puts a lot of pressure on storage. Legacy storage systems are terrible at delivering storage performance on demand, and terrible at delivering automation at rapid scale for those resources.

Charlie: So I guess this means storage companies have some changes to make, and a lot of work to do to keep up.

Jay: Yeah, that’s definitely the case. If you look at traditional, monolithic storage architectures, they’re not designed to be as agile as [they need to be] to meet the demands placed upon them. I’ll give you an example: simply changing storage performance very quickly. That is something that takes days, or potentially weeks, if you have to move a volume off one system and onto another. The next generation data center is about being able to react to that instantaneously.
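To make that contrast concrete, here is a minimal sketch of what reacting instantaneously could look like in practice: a single API call that re-rates a volume’s performance limits in place, with no data migration. The endpoint, payload fields and volume ID below are hypothetical placeholders for illustration, not any particular vendor’s API.

```python
# Hypothetical sketch: change a volume's performance envelope with one API
# call instead of migrating data between arrays. Endpoint, fields and IDs
# are invented for illustration.
import json
import urllib.request


def set_volume_qos(api_url, volume_id, min_iops, max_iops):
    """Ask the storage system to re-rate a volume's QoS limits in place."""
    payload = json.dumps({
        "volumeID": volume_id,
        "qos": {"minIOPS": min_iops, "maxIOPS": max_iops},
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{api_url}/volumes/{volume_id}/qos",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    # The change takes effect immediately; no data is moved.
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Example: raise an application's ceiling from 5,000 to 15,000 IOPS on demand.
# set_volume_qos("https://storage.example.com/api", 1042, 5000, 15000)
```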

Eric: That’s interesting, Jay. So you’re talking about agility and really the speed of change. Is that something storage is going to have to do in an automated fashion?

Jay: Yes, I think storage will need to react to the changes people put upon it. So if somebody wants greater performance, you have to be able to react to that, not only to provide the application more performance but, in a shared multi-tenant setting, to [make sure] you’re not affecting the performance of everyone else. That’s where quality of service as an architecture is a very important concept within the next generation data center, and within storage in particular. When it’s built in at the very core, you have the ability to create the high levels of virtual machine density we talked about at the beginning of this podcast, and to offer that kind of predictable experience to the applications receiving that storage volume and storage performance.
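For readers who want a feel for the mechanism Jay is describing, here is a simple token-bucket sketch of what quality of service built in at the core can mean: each volume gets its own I/O allowance, so a noisy tenant exhausts its own bucket rather than everyone else’s performance. The class, rates and numbers are illustrative assumptions, not how any specific array schedules I/O.

```python
# Illustrative token-bucket model of per-volume QoS in a multi-tenant array.
# Each volume refills tokens at its sustained rate and may briefly burst.
import time


class VolumeQoS:
    def __init__(self, max_iops, burst_iops):
        self.rate = max_iops        # sustained IOPS allowance
        self.capacity = burst_iops  # short bursts above the sustained rate
        self.tokens = burst_iops
        self.last = time.monotonic()

    def admit(self, ios=1):
        """Return True if this volume may issue `ios` I/Os right now."""
        now = time.monotonic()
        # Refill according to the sustained rate, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= ios:
            self.tokens -= ios
            return True
        return False                # over its limit; the I/O is throttled


# Two tenants on the same system, each with an independent allowance;
# neither can starve the other.
tenant_a = VolumeQoS(max_iops=5000, burst_iops=8000)
tenant_b = VolumeQoS(max_iops=1000, burst_iops=1500)
```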

Charlie: You can get more information about this topic from our on-demand webinar “Storage Requirements For The Next Generation Data Center”. Jay will be there along with our experts from Storage Switzerland to talk to you about what’s going on with storage and the next generation data center. Thanks for joining us.

Watch On Demand



Posted in Storage Q&A
