Business Outcome Driven Storage

The new mantra for IT is to focus the technology and services its data centers provide on business outcomes. That concept seems more easily applied to front-line and customer-facing applications, like customer relationship management, decision support and Web servers that interact with customers and prospects. But how do “business outcomes” relate to storage, which is essentially the plumbing of the data center? How can plumbing be realigned to help deliver positive business outcomes?

Part of the problem is that storage is a “top of the whiteboard” issue for many CIOs because of its impact on the IT budget and the user experience. The protection of data and the importance of uptime, even through the worst of disasters, make storage a primary concern. The best way for storage to support business outcomes is to alleviate that concern. But for storage to relinquish its place at the top of the whiteboard, it has to just work, no matter the circumstance.

Part of “just working” is offering performance so high that the storage system no longer needs to be tuned. It also needs to automatically keep data on the most cost-effective medium at any given point in time. Finally, it should protect itself from any disaster situation, again, automatically.

Step 1: All the Performance You Want but No More Than You Need

Storage, for once, has the capability to be well ahead of the performance curve, thanks to memory-based technologies, most notably flash. While there are minor exceptions, enterprise storage systems coupled with the right amount and type of flash can offer all the performance that high-user-count databases and dense virtual environments need.

For performance to become a nonissue for the majority of data centers, it has to be implemented in an optimal fashion. Most vendors simply treat flash as a fast disk drive by leveraging solid-state drives (SSDs) that repackage flash into a hard drive form factor. While these implementations solve the short-term performance problem, over time, as data centers continue to scale, their weaknesses may be exposed.

Some enterprise storage vendors are moving beyond the “fast drive” concept and implementing flash as it should be implemented: as memory storage. Hitachi Data Systems (HDS), for example, provides flash module drives (FMDs), flash storage devices built specifically for the most demanding enterprise-class workloads. The FMD features a custom-designed, rack-optimized form factor and innovative flash memory controller technology. These features let the module achieve higher performance, lower cost per bit and greater capacity compared to the conventional drive form-factor SSDs on the market today.

It is unlikely that most data centers will catch up to the performance capabilities of flash, but if and when they do, the next level of performance is already clearly defined. Unlike hard disk technology that hits a performance wall, memory-based storage can move beyond the performance range of NAND flash by incorporating non-volatile memory (NVM), which will offer DRAM-like performance with flash persistence.

One of the most time-consuming activities that IT undertakes is tuning and re-tuning the storage infrastructure to support greater database scale or greater virtual machine (VM) density. If a modern enterprise storage system leverages flash to its full potential, the vast majority of these tuning tasks are no longer needed, and IT is free to focus on business outcomes instead of just keeping things running.

Step 2: Store All Data Forever, Affordably

There is now more reason than ever to keep all data forever, or at least for a very long time. Every day there are examples of organizations repurposing old data to create new products, provide better support or simply improve efficiency. There are also many examples of companies that aggressively delete data to limit their perceived liability, only to destroy the very information that could have prevented legal action against them in the first place. The challenge is to store this data cost-effectively and to manage its movement to cheaper tiers as the need for rapid recall subsides.

The industry has attempted such initiatives before, and Hierarchical Storage Management (HSM) and Information Lifecycle Management (ILM) were such failures that their mere mention can send chills down an IT professional’s spine. Fortunately, the industry has progressed a long way from those premature attempts.

A key challenge is that while each data type has an ideal storage platform, what’s considered “ideal” will change as the data ages and becomes less active. Creating a single storage system that does it all results in a “jack of all trades, master of none” situation. There are simply times where data needs to be on high-performance block storage and others where it needs to be on high-performance NAS storage. There are also situations where object storage is more appropriate and even times where bulk storage, either in the cloud or on tape, makes more sense.

Now, thanks to storage virtualization and software-defined infrastructures, data can be moved seamlessly between these storage systems. Also, advances in indexing as well as the raw processing power now available to churn through data make finding the right data at the right time much easier, regardless of its location.
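To make the tiering idea concrete, here is a minimal sketch, in Python, of the kind of age-based placement policy a storage virtualization or software-defined layer might apply. The tier names, thresholds and “move” step are illustrative assumptions, not any particular vendor’s API.

```python
# Minimal sketch of an age-based tiering policy. Tier names, thresholds
# and the "move" step are illustrative assumptions, not a vendor API.
from dataclasses import dataclass
from datetime import datetime, timedelta

# Tiers ordered from fastest/most expensive to slowest/cheapest.
TIERS = ["flash_block", "nas", "object", "cloud_or_tape"]

@dataclass
class DataSet:
    name: str
    tier: str
    last_accessed: datetime

def target_tier(ds: DataSet, now: datetime) -> str:
    """Pick a tier based on how recently the data was touched."""
    idle = now - ds.last_accessed
    if idle < timedelta(days=30):
        return "flash_block"   # active data stays on high-performance block
    if idle < timedelta(days=180):
        return "nas"           # cooling data moves to file storage
    if idle < timedelta(days=730):
        return "object"        # cold data lands on object storage
    return "cloud_or_tape"     # everything else goes to bulk storage

def rebalance(datasets: list[DataSet], now: datetime) -> None:
    """Move each data set to its target tier if it is not already there."""
    for ds in datasets:
        dest = target_tier(ds, now)
        if dest != ds.tier:
            print(f"moving {ds.name}: {ds.tier} -> {dest}")
            ds.tier = dest  # a real system would call the virtualization layer here

if __name__ == "__main__":
    now = datetime.now()
    rebalance([DataSet("q3-sales-db", "flash_block", now - timedelta(days=400))], now)
```

In practice the placement decision would also weigh access frequency, retention policy and indexing metadata, but the basic loop of evaluating each data set against its current tier is the same.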

This instant access to the right data is another significant timesaver for IT. It eliminates the scramble for old data when it is requested and keeps data on the most cost-effective storage tier for its current value. Lower storage costs mean more IT budget for business outcome initiatives.

Step 3: Continuous Protection From Any Disaster or Failure

Disaster can strike in a variety of ways: application corruption, a server crash, a storage system failure or a site-wide outage. Each situation requires a different recovery method, but success comes from one primary ingredient: PRACTICE! The challenge for today’s stretched-too-thin IT staff is finding that practice time.

One solution is to create a continuous protection infrastructure so that data and applications can be seamlessly moved from storage system to storage system and from data center to data center. This approach includes leveraging the public cloud.

In a continuous-protection design, data is snapshotted on the primary storage system and then replicated (synchronously or asynchronously) to a second system in the data center. It’s then replicated to a secondary site and, finally, if it makes sense, to a secure cloud provider. With near-real-time copies of data available in each location, the applications can be brought up in a test mode on an ongoing basis. Some organizations take this approach to the extreme and actually move their applications between these locations regularly. With this continuous protection method, “practice” is built in.
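The replication chain described above can be pictured as a simple orchestration loop. The sketch below uses hypothetical site names and placeholder snapshot, replicate and test-boot calls purely to illustrate the cascade and the built-in “practice” step; it is not a real array or cloud API.

```python
# Minimal sketch of a continuous-protection cycle: snapshot on the
# primary array, replicate the copy outward in stages, then exercise a
# test boot at each location so recovery practice is built in.
# All names and calls here are illustrative placeholders.
import time
from datetime import datetime

SITES = ["primary-array", "in-room-copy", "dr-site", "cloud-provider"]

def take_snapshot(site: str) -> str:
    """Create a point-in-time copy on the primary storage system."""
    snap_id = f"{site}-{datetime.now():%Y%m%dT%H%M%S}"
    print(f"snapshot {snap_id} created on {site}")
    return snap_id

def replicate(snap_id: str, source: str, target: str) -> None:
    """Copy the snapshot to the next hop in the chain."""
    # Assumed policy: synchronous inside the data center, asynchronous beyond it.
    mode = "sync" if target == "in-room-copy" else "async"
    print(f"replicating {snap_id}: {source} -> {target} ({mode})")

def test_boot(site: str, snap_id: str) -> bool:
    """Bring the application up in test mode against the local copy."""
    print(f"test-booting application from {snap_id} at {site}")
    return True

def protection_cycle() -> None:
    snap = take_snapshot(SITES[0])
    for source, target in zip(SITES, SITES[1:]):
        replicate(snap, source, target)
        if not test_boot(target, snap):
            print(f"recovery test failed at {target}")

if __name__ == "__main__":
    for _ in range(3):          # run a few cycles; the interval is an assumption
        protection_cycle()
        time.sleep(1)
```

Because the test boot happens at every hop on every cycle, the recovery procedure is rehearsed continuously rather than saved for an annual drill.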

The ability to make these rapid moves of applications is enhanced by virtualization, which containerizes each environment, and by improved snapshot and replication capabilities, which now can move clean copies of data as they change. This technology also leverages the availability of increased bandwidth, making remote access almost as good as local access.

Building practice into the data protection process eliminates the disaster recovery drills that seldom lead to a positive result but still consume precious IT time. Continuous protection also replaces the redundant “belt and suspenders” approach to data protection, finally eliminating unneeded backup jobs.

Conclusion

Storage can best impact business outcomes if it can get out of the way, or at least make its operation less of a concern. The best way to accomplish this is by making sure data is always available, always protected and in the right place at the right time. In this way, IT can provide instant answers to demanding applications and users while delivering a cost-effective storage methodology that reduces the storage drain on the IT budget. Finally, continuous protection eliminates the greatest fear of an organization: recovery when disaster strikes. Continuous protection makes that recovery a known quantity.

Sponsored by Hitachi Data Systems

Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.
