How IT Can Leverage Its Own Big Data for Operational Efficiency

OpsDataStore Briefing Note

One of the most practical use cases for big data analytics is the analysis of data the organization is already generating, typically in the form of log file examination, which yields impressive results (most notably in the realm of security). The reality is that the organization can glean much more from the data it already generates, especially in IT. Almost all the hardware and software in a data center produces data about its operating conditions; in fact, there is so much of it that IT faces its own big data challenge.

The entire panoply of performance-related monitoring tools deployed in IT yields an opportunity ripe for harvest. Because IT has already instrumented key infrastructure and applications with performance monitors, these projects can start quickly and can benefit from historical data. The big data challenge has always been twofold: the huge volume of performance data, and the fact that it is stored in multiple silos. Finding the truly important information is extremely difficult.

Traditional Tools Fall Short

There are tools that report on and analyze specific aspects of an environment: server monitoring tools, network monitoring tools, application monitoring tools and storage monitoring tools. The problem is that each of these tools is relatively myopic and doesn’t cross-correlate information. Vendor-provided infrastructure tools are even worse, reporting on only a single brand of product. These tools also tend to report on what has happened instead of predicting what might occur; in other words, something has to break before the tool provides any real value. IT is left with a variety of tools, each providing too much information and none aware of the others. When performance problems inevitably arise, IT administrators have to sift through the various tools and reports to determine where the problem is. Think of the endless hours of costly “war room” meetings, all trying to figure out what’s really wrong.

The Value of IT Big Data

One of the top challenges when starting any big data project is designing the infrastructure that will create the data to be analyzed. If the Internet of Things (IoT) will drive the project, those sensors or devices need to be deployed and instrumented to generate data that can be collected and made useful. The data center’s servers, hypervisors, operating systems, applications, networks and storage systems, by contrast, have been continuously generating performance data since the day they were turned on. If this data could be harnessed, related and meaningfully analyzed, it could provide actionable intelligence that would allow not only quick debugging of problems but also predictive interception of problems before they ever impact the organization.

Introducing OpsDataStore

OpsDataStore is a company that brings the power of big data analytics to IT. Its software provides real-time operational transparency across the IT stack. It collects data from existing server, virtual server, network, application and other performance monitors to automatically build a holistic, continuously updated topology of the entire environment – from transactions all the way to storage. OpsDataStore does not replace monitoring solutions from other vendors; instead, it brings their data together in real time, something that is critical in the dynamically changing virtualized and containerized data centers on which IT and critical business services depend.
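
The briefing does not expose OpsDataStore’s internals, but the “what runs on what” idea can be illustrated with a minimal sketch: a directed graph that records, for each monitored element, the elements it runs on, assembled from feeds like those named above. Everything below (the Topology class, the element names, the feed labels) is a hypothetical illustration, not OpsDataStore’s actual data model.

```python
# Hypothetical sketch: a "what runs on what" topology as a directed graph.
# Element names and feeds are illustrative, not OpsDataStore's actual schema.

from collections import defaultdict

class Topology:
    def __init__(self):
        self.runs_on = defaultdict(set)   # element -> elements it runs on

    def relate(self, element, underlying):
        """Record that `element` runs on `underlying` (e.g. a VM on a host)."""
        self.runs_on[element].add(underlying)

    def dependencies(self, element):
        """Walk the graph to find everything `element` ultimately runs on."""
        seen, stack = set(), [element]
        while stack:
            for dep in self.runs_on[stack.pop()]:
                if dep not in seen:
                    seen.add(dep)
                    stack.append(dep)
        return seen

topo = Topology()
topo.relate("checkout-transaction", "app-server-1")   # from an APM feed
topo.relate("app-server-1", "vm-42")                  # from a hypervisor feed
topo.relate("vm-42", "esx-host-7")                    # from vCenter data
topo.relate("esx-host-7", "array-lun-3")              # from a storage feed

print(topo.dependencies("checkout-transaction"))
# -> {'app-server-1', 'vm-42', 'esx-host-7', 'array-lun-3'} (order may vary)
```

With a graph like this kept current as VMs move and containers come and go, a slow transaction can be traced to every piece of infrastructure it touches.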

Most recently, OpsDataStore added wire data metrics from ExtraHop. ExtraHop automatically discovers the various elements communicating across the network stack. It then lets you observe how those elements perform when talking to each other and analyze that data to proactively prevent network-related performance problems.

OpsDataStore takes the network performance data from ExtraHop and directly relates it to all the other data from the complete environment: server, virtual server, application, storage, etc. The result is a true, complete, continuous, end-to-end understanding of “what runs on what” across the entire technology stack. Once you know what is related to what, you can do meaningful analysis, including correlation for root cause analysis, among other things.

For example, OpsDataStore can automatically correlate slow application transactions, as measured by an application performance management tool such as AppDynamics or Dynatrace, with the related slow network components measured by ExtraHop that are actually at the root of the slow transactions, or with the slow performance of the related VMware infrastructure.
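
To make the correlation idea concrete, here is a minimal, hypothetical sketch: given a transaction’s latency series (as an APM tool might report it) and metric series from the elements the topology says it runs on, rank the candidates by how closely each one tracks the slowdown. The element names, metrics and numbers are invented for illustration; OpsDataStore’s actual analytics are certainly more sophisticated than a single Pearson coefficient.

```python
# Hypothetical sketch of correlation-based root-cause ranking.
# All series and element names are made-up illustrations.

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

# Per-minute latency of a slow transaction, as an APM tool might report it.
txn_latency = [120, 135, 300, 910, 880, 240, 130]

# Metrics from elements the topology says the transaction runs on.
candidates = {
    "vm-42 cpu-ready":        [2, 3, 4, 5, 4, 3, 2],        # hypervisor data
    "net-seg-9 retransmits":  [1, 2, 40, 160, 150, 20, 2],  # wire data
    "array-lun-3 latency-ms": [5, 5, 6, 5, 6, 5, 5],        # storage data
}

ranked = sorted(candidates.items(),
                key=lambda kv: pearson(txn_latency, kv[1]),
                reverse=True)
for name, series in ranked:
    print(f"{name}: r = {pearson(txn_latency, series):.2f}")
# The retransmit series tracks the latency spike most closely,
# pointing the investigation at the network rather than CPU or storage.
```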

The challenge has always been these silos: APM tools, network tools and infrastructure tools only ever know about their own “domain”; they are blind to all the others. For example, ExtraHop has only a rudimentary awareness of a virtual machine and no knowledge at all of the physical server infrastructure. APM tools know nothing about the network or the physical infrastructure, and so forth.

OpsDataStore continuously maps the infrastructure data it gets from VMware to the information it gets from ExtraHop to deliver a true real-time picture of the performance and operation of your VMware-related infrastructure. If the applications running in that environment are also instrumented with APM tools, that continuous real-time visibility extends all the way to the applications and transactions.

StorageSwiss Take

In the modern data center, data is cheap. Almost every system produces more raw data than a human can process. OpsDataStore transforms that data into knowledge by automatically relating, analyzing and correlating it, and turning it into actionable information, including visual topology maps that IT professionals can use to proactively manage their environments. It unites all the monitors into a holistic whole.

Eight years ago, George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a heavily sought-after public speaker. With 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland, he was CTO at one of the nation’s largest storage integrators, where he was in charge of technology testing, integration and product selection.
