A Big Data Center without White Boxes – Storage in the Large Financial Enterprise

Google, Facebook and many of the largest web-scale companies have made the use of commodity, ‘white box’ storage systems seem like standard practice for large enterprises. Not so. Storage Switzerland recently spoke with a global financial institution about how they handle some of their unique challenges, such as supporting real-time compute and large-scale analytics, while keeping up with data growth.

In this report we describe what some of their tier-1 infrastructure looks like – and doesn’t look like. They use traditional, scale-up storage systems, NOT scale-out commodity hardware. We’ll discuss some of the challenges this company faces, what their storage infrastructure must do to meet them and why that infrastructure isn’t racks of white boxes.

These enterprises are also continually upgrading their infrastructures. In fact, the company interviewed for this report replaces 20% of its storage system capacity every year – and does so without data loss, downtime or excessive administrative overhead.

We’ll also describe some of the primary features and functionality of their storage infrastructure. These are enterprise systems, like the Hitachi Virtual Storage Platform G1000 (VSP G1000), that can meet the company’s tier-1 capacity demands (in the petabytes) while supporting their non-stop replacement policy.

These systems support a “continuous cloud infrastructure” strategy by leveraging concepts such as storage federation and non-disruptive data migration to smooth the transition from one generation of storage to the next. They also provide the flexibility to meet unexpected demand and unplanned events, while driving resource efficiency with technologies like dynamic provisioning and dynamic tiering. Enterprises like these need enterprise-grade systems that provide the unquestioned uptime and reliability that’s taken for granted in the financial industry.

Please complete the registration information for an email copy of this report. In it you’ll learn:

  • Why Big Finance (and other large enterprises) don’t use commodity hardware
  • How they run a continuous non-disruptive replacement cycle on their infrastructures
  • Why these IT organizations operate much like public cloud providers

Sponsored by Hitachi Data Systems


Eric is an Analyst with Storage Switzerland and has over 25 years of experience in high-technology industries. He’s held technical, management and marketing positions in the computer storage, instrumentation, digital imaging and test equipment fields. He has spent the past 15 years in the data storage field, with storage hardware manufacturers and as a national storage integrator, designing and implementing open systems storage solutions for companies in the Western United States. Eric earned degrees in electrical/computer engineering from the University of Colorado and marketing from California State University, Humboldt. He and his wife live in Colorado and have twins in college.

One comment on “A Big Data Center without White Boxes – Storage in the Large Financial Enterprise”
  1. Tim Wessels says:

    Well, of course it would be expensive to do scale-up storage this way, and obviously this particular “global financial institution” doesn’t mind spending whatever it takes to make it work. You can probably count the number of customers like this in the hundreds, not thousands. For everyone else who does not need “gold plated” scale-up storage, white box, scale-out object-based storage works fine.

