Is it Time for Application-Focused Storage?

Service level agreements (SLAs) for application availability, business continuity and performance are more demanding than ever before. Each application often has its own SLA requirements, which results in multi-vendor, heterogeneous IT infrastructures that are challenging to manage. As a result, storage professionals are overwhelmed with the day-to-day tasks of managing infrastructure, leaving them no time to become strategic partners to lines of business by ensuring that application SLAs are not only met but exceeded. The modern storage manager requires a universal infrastructure that lays a foundation for high availability and Tier 0 levels of performance regardless of where data lives, and that streamlines the orchestration of applications and data across a range of underlying storage infrastructure resources.

Overview of Business Continuity and Performance Requirements

Previously, businesses typically relied on a small number of applications for truly mission-critical functions. Today, the number of critical applications has multiplied exponentially, and they have SLAs that allow for almost no downtime. High availability capabilities including fast failover are required universally, whether the application is running on bare metal, virtualized, containerized or public cloud infrastructure, to facilitate business continuity.

At the same time, a number of applications now require unprecedented levels of performance. Flash storage solves much of the performance challenge, but storage managers must be discerning about how they use flash capacity. Flash capacity still carries a price premium compared to hard disk drives, and introducing new flash-only arrays into storage environments is an expensive and time-consuming task. Not only must new production storage arrays be procured, but storage managers may need to learn a new software management framework. Flash capacity should be used intelligently, when truly required per the application’s SLA, to optimize return on investment (ROI).

Challenges of Orchestrating Complex Modern Technology Stacks

The challenge with meeting more demanding SLAs is the diverse range of infrastructure resources required to meet the increasingly diverse range of application cost, performance, compliance and control requirements. One size truly does not fit all. Not only do solid-state drive (SSD) and hard disk drive (HDD) storage media need to coexist, but block, file and object storage access protocols do as well. At the same time, various operating systems are in play, and bare metal machines, virtual machines and containers are all being leveraged. Meanwhile, the advent of infrastructure-as-a-service (IaaS) and software-as-a-service (SaaS) cloud-based delivery models extends the storage environment, and the need to manage it, beyond the on-premises data center.

For many enterprises, this heterogeneity has stranded applications and data on disjointed silos of infrastructure resources. Storage managers are left in a position where they cannot deliver fast access to data when applications demand it. Migrating applications across these storage silos typically requires massive investments not only in hardware but also in IT staff time. Migrations take a long time, and they risk downtime for applications and data. There is also a high chance of human error during migrations, which can lead to application outages after the migration is complete.

How Application Disaggregation Can Help

To deliver mission-critical levels of availability and performance while simultaneously providing more agile and streamlined data and workload mobility, storage planners may consider using virtualization to decouple applications from storage infrastructure resources. Such an approach creates the opportunity for high availability services, such as application monitoring and virtual machine (VM) reboots, to be standardized irrespective of how and where an application is hosted. It also enables storage resources such as SSDs to be pooled and allocated based on application SLAs. A virtualized approach can also make it easier to migrate data across infrastructure resources without impacting application performance.
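The SLA-driven pooling idea above can be sketched in a few lines. This is not how any particular product implements it; the pool names, latency figures and cost figures below are hypothetical, and the sketch simply shows the core policy: satisfy the application's latency SLA with the cheapest media that can meet it.

```python
from dataclasses import dataclass

@dataclass
class AppSLA:
    name: str
    max_latency_ms: float  # latency target taken from the application's SLA

@dataclass
class StoragePool:
    name: str
    latency_ms: float      # typical media latency (illustrative numbers)
    cost_per_gb: float     # illustrative cost

# Hypothetical pooled resources: flash is fast but costly, HDD is cheap but slow.
POOLS = [
    StoragePool("flash", latency_ms=0.5, cost_per_gb=0.30),
    StoragePool("hdd", latency_ms=8.0, cost_per_gb=0.03),
]

def allocate(sla: AppSLA) -> StoragePool:
    """Pick the cheapest pool that still meets the SLA's latency target."""
    eligible = [p for p in POOLS if p.latency_ms <= sla.max_latency_ms]
    if not eligible:
        raise ValueError(f"no pool can satisfy the SLA for {sla.name}")
    return min(eligible, key=lambda p: p.cost_per_gb)
```

Under this policy, a latency-sensitive database lands on flash while an archive workload lands on HDD, which is exactly the "flash only where the SLA demands it" behavior described above.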

For its part, Veritas’ InfoScale product takes what it calls an “application-centric” approach to storage virtualization. From a business continuity perspective, it focuses on providing application visibility and control to enable high availability. For example, it provides event-based monitoring for instant notifications regarding a change in application health, and can then automatically trigger remediation based on those state change notifications. To deliver flash-driven performance cost effectively, it applies intelligent caching to attach flash drives to any existing shared storage implementation based on an application’s or an environment’s quality of service (QoS) requirements. As a result, the enterprise can obtain performance acceleration where it is needed, while simultaneously better utilizing its all-flash array investments.
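The event-based monitoring pattern described above can be illustrated generically. The sketch below is not InfoScale's implementation, whose mechanics are proprietary; it simply shows the pattern: a health state change (not polling on an unchanged state) fires a notification, and a transition to a failed state triggers automatic remediation such as a service restart. All names are hypothetical.

```python
from enum import Enum
from typing import Callable

class Health(Enum):
    HEALTHY = "healthy"
    DEGRADED = "degraded"
    FAILED = "failed"

class AppMonitor:
    """Event-based monitor: notify on any state change, remediate on failure."""

    def __init__(self, name: str,
                 notify: Callable[[str, "Health"], None],
                 remediate: Callable[[str], None]):
        self.name = name
        self.state = Health.HEALTHY
        self.notify = notify        # e.g. alert the operations team
        self.remediate = remediate  # e.g. restart the VM or service

    def report(self, new_state: Health) -> None:
        if new_state is self.state:
            return                  # no event for an unchanged health state
        self.state = new_state
        self.notify(self.name, new_state)
        if new_state is Health.FAILED:
            self.remediate(self.name)
            self.state = Health.HEALTHY  # assume remediation succeeded
```

The design choice worth noting is that remediation is driven by the state change notification itself, so recovery begins the moment failure is observed rather than on the next polling interval.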

Complementing these capabilities, InfoScale provides a common platform to migrate applications with less friction, because data is optimized across various infrastructure resources behind the scenes. Notably, InfoScale supports all of the major public cloud platforms and is available in the AWS marketplace. Customers have the flexibility to optimize data placement according to cost and control requirements. At the same time, they can more easily integrate new use cases such as cloud disaster recovery.


The role of the modern IT professional is to enable the business to derive new value and competitive advantage from IT. To achieve this objective, IT professionals cannot be bogged down with system-level management tasks across disparate silos. Instead, they must be able to focus on the health of the application. Decoupling applications from infrastructure can help to enable more consistent availability, more optimized performance and better orchestration. Applications can run more reliably and cost efficiently at the required level of performance, while the management burden on IT is streamlined. Storage resources can be pooled and allocated more dynamically, so that hardware is better utilized. Additionally, data migrations across heterogeneous environments can happen more seamlessly, without risking application uptime or performance and with less hassle for IT.

Armed with the right solution, the IT professional has confidence that applications are performing up to expectations and that they have the capacity that they need.

Sponsored by Veritas


Senior Analyst, Krista Macomber produces analyst commentary and contributes to a range of client deliverables including white papers, webinars and videos for Storage Switzerland. She has a decade of experience covering all things storage, data center and cloud infrastructure, including: technology and vendor portfolio developments; customer buying behavior trends; and vendor ecosystems, go-to-market positioning, and business models. Her previous experience includes leading the IT infrastructure practice of analyst firm Technology Business Research, and leading market intelligence initiatives for media company TechTarget.

