Recent research by IDC predicts that IT professionals will be managing 4-5 times more data per person by 2020, but that only 5% of that data will be important. So while a lot of data is being created, it’s not all created equal. Ideally, that critical 5% would be stored on high-performance flash and the rest on cost-effective hard disk drives. The question is: who, or what, is going to analyze that data and move it between these tiers, and how can critical applications be assured of consistently high performance?
NexGen has been focused on this question since its inception almost five years ago, with an emphasis on building storage systems that automatically move data between storage tiers and, more importantly, guarantee quality of service (QoS) for the most important applications. Since different data has different value to the business, understanding that business value would seem crucial to storage system design. This is the approach NexGen has taken with its third-generation architecture, the N5, which provides over 15TB of flash capacity per system and features a new caching technology the company claims can increase flash utilization by 150%.
The Business Value of Data – A Moving Target
NexGen commissioned its own research into the business value of data and found that, for most companies, that value changes over time. It also learned that few companies actually manage data based on this value, because it’s hard to do: that critical 5% is a moving target. The data comprising the most important 5% can be different today than it was yesterday. To address this problem, NexGen has developed a new approach it calls “Value-driven Data Management”.
Value-driven Data Management
Storage systems have evolved from a focus on capacity and performance to managing support for selected workloads, those with the greatest business value, based on SLAs. To this end, NexGen’s systems leverage a purpose-built architecture, policy-based data management, QoS and real-time performance monitoring to enable the IT staff to effectively deal with that critical 5% among the overall data explosion they’re faced with. And, with an improved caching architecture to increase overall flash utilization, they can support larger data sets and more workloads.
Prioritized Active Cache (PAC) – a Smarter Cache
NexGen has split the cache into two active virtual pools that can be independently used by the system to deliver performance that’s aligned to the QoS priorities. This means more flash capacity is available for read caching, while maintaining a different flash area for mirrored write caching (for high availability) to handle the ingest of data into flash. The caching algorithms are self-tuning and dynamic, based on the workload presented by the host(s). All that’s required from a configuration standpoint is that the administrator associate their volumes with the appropriate storage QoS performance policy, and data is prioritized into flash from there.
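To make the idea concrete, here is a minimal sketch of QoS-prioritized cache admission and eviction. This is purely illustrative: NexGen does not publish its caching algorithms, and the policy tier names, class, and method names below are assumptions. The sketch models one of the two pools described above (the read cache), where blocks from volumes on a higher-priority QoS policy survive eviction ahead of lower-priority blocks, regardless of recency.

```python
from collections import OrderedDict

# Hypothetical QoS policy tiers: lower number = higher priority.
QOS_PRIORITY = {"mission-critical": 0, "business-critical": 1, "non-critical": 2}

class PrioritizedReadCache:
    """Toy read cache: evicts the lowest-priority block first,
    breaking ties by least-recent use."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        # (volume, lba) -> priority; OrderedDict tracks recency (oldest first).
        self.blocks = OrderedDict()

    def admit(self, volume, lba, policy):
        key = (volume, lba)
        self.blocks.pop(key, None)            # refresh recency if already cached
        self.blocks[key] = QOS_PRIORITY[policy]
        while len(self.blocks) > self.capacity:
            self._evict()

    def _evict(self):
        # max() returns the first key with the worst (highest) priority value;
        # since iteration is oldest-first, ties fall on the least-recent block.
        victim = max(self.blocks, key=lambda k: self.blocks[k])
        del self.blocks[victim]

# Usage: with room for only two blocks, the non-critical block is evicted
# first, even though it is more recent than the first mission-critical block.
cache = PrioritizedReadCache(capacity_blocks=2)
cache.admit("vol-sql", 100, "mission-critical")
cache.admit("vol-backup", 7, "non-critical")
cache.admit("vol-sql", 101, "mission-critical")
```

In a real array the admission policy would also weigh ingest rates and the mirrored write pool, but the core point matches the paragraph above: the administrator only assigns a QoS policy per volume, and prioritization into flash follows from that.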
According to NexGen, this process improves flash utilization by 150% and delivers a 3x performance increase over competitors’ systems; specifically, those where the flash is statically configured and/or used exclusively as a read or write cache/tier. PAC is shipping with new systems today and is available for current customers via a software upgrade, as well.
The combination of greater flash efficiency and QoS prioritization means a hybrid system can support the same workloads while consuming less flash capacity. Stated another way, NexGen’s system can support considerably more workloads with the same amount of flash: 2.5 times more than other hybrids and 10 times more than all-flash systems, based on the company’s product data.
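As a back-of-envelope illustration of that claim, the arithmetic below applies the stated 150% utilization improvement to an assumed baseline. Only the 150% figure comes from the article; the baseline workload count is invented for the example.

```python
# "+150%" utilization improvement => each unit of flash does 2.5x the work.
improvement = 1.50
effective_multiplier = 1 + improvement  # 2.5

# Assumed baseline: workloads a conventional hybrid supports on the same flash.
baseline_workloads = 100
workloads = baseline_workloads * effective_multiplier
print(workloads)  # 250.0 -- i.e., 2.5x the workloads on the same flash
```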
We see the same shift in flash-based storage system design that we saw in server virtualization: it’s not about the technology providing resources for the entire IT environment, it’s about the most important workloads getting those resources, the 5% that really drives the business. NexGen’s focus on understanding the business value of data and aligning that value with the cost to store and access it, while increasing flash efficiency, seems on target with this approach and worth a look.