Enabling Intel Optane for Modern Applications – MemVerge Briefing Note

Workloads, from a performance perspective, are polarized. Many workloads have all their performance demands met by a typical all-flash array. At the other pole, however, is a group of applications that needs more performance than even NVMe-based storage systems can provide. While these applications are, at least for now, few in number, they are often the ones organizations are counting on to drive the next phase of innovation.

Today’s new age of data, ushered in by artificial intelligence, big data and the Internet of Things (IoT), demands that the storage infrastructure deliver volume, variety and velocity. The problem is that most storage IO is too slow, especially for high file-count environments and ad hoc queries. Network upgrades like NVMe-oF are not enough, because these workloads are often bottlenecked at the peripheral bus, which means internal storage can’t meet the performance demands either. The workloads become more memory-centric, but the memory capacity available per server or node becomes an issue.

Enter 3D XPoint

Intel, in collaboration with Micron, is close to resolving the issue, at least from a hardware perspective, with the introduction of 3D XPoint persistent memory. Intel claims the technology is 1,000X faster than NAND flash, has 1,000X the endurance of NAND flash, and is 10X denser than conventional memory. The first iteration of this hardware is Intel Optane, which first appeared as a storage-only product in an SSD form factor in 2017.

Intel recently announced general availability of a persistent memory version of Optane in a DIMM form factor. The product operates in one of three modes, depending on a server’s BIOS setting. The first, Volatile Memory Mode, provides more capacity than DRAM: 3TB per socket in 2019 and 6TB per socket in 2020. The second, Block Storage Mode, makes the device appear as block storage with lower latency than NVMe SSDs. These two modes provide backward compatibility, enabling applications and storage systems to use Optane without change.

The third mode is App Direct Mode, a new persistent memory programming model that requires changing applications or storage system software, in some cases significantly. App Direct Mode takes full advantage of the 3D XPoint technology, with even lower latency and faster response times. The problem is that the changes required to exploit this mode are beyond the capabilities of most enterprises, and even beyond those of many storage software experts.
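To make the programming model concrete: on Linux, App Direct Mode is typically consumed through a DAX-capable filesystem created on the Optane DIMMs, with applications memory-mapping files and persisting plain CPU stores via cache-flush instructions (usually through a library such as Intel’s PMDK). The sketch below illustrates that load/store model in Python against an ordinary file; the path is illustrative, and mmap.flush() stands in for the user-space cache flushes a real persistent-memory mapping would use.

```python
import mmap

SIZE = 4096
PATH = "demo.dat"  # on real hardware, a file on a DAX-mounted pmem filesystem

# Create and size the backing file (fsdax exposes persistent memory as files you can mmap)
with open(PATH, "wb") as f:
    f.truncate(SIZE)

with open(PATH, "r+b") as f:
    buf = mmap.mmap(f.fileno(), SIZE)
    buf[0:5] = b"hello"  # a plain store into mapped memory -- no write() system call
    buf.flush()          # stands in for the CLWB/fence sequence a pmem library issues
    buf.close()

# The data is durable without ever going through the block IO layer
with open(PATH, "rb") as f:
    print(f.read(5))  # b'hello'
```

This is the shift App Direct Mode asks of applications: persistence becomes a memory operation plus an explicit flush, rather than a storage system call, which is why adopting it usually means restructuring code.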

Introducing MemVerge

MemVerge is a new take on hyper-converged systems, designed from the ground up for persistent memory. Its Distributed Memory Objects (DMO) technology is a distributed software system that supports both memory and storage APIs on top of Intel Optane DIMMs. It uses App Direct Mode for maximum performance and minimal latency, but doesn’t force the organization to modify its applications. MemVerge manages and provisions Optane as either memory or storage, and can even expand memory beyond the capacity of a single node. All storage IO, as well as memory IO, stays on the memory bus unless the workload needs additional memory from other nodes.

The solution is delivered as an appliance using Intel Cascade Lake processors with persistent memory installed. Each node in the MemVerge cluster supports up to 6TB of system memory and 360TB of physical data capacity. The MemVerge software automatically tiers data from memory partitions to Optane storage partitions, and eventually to flash storage, based on data access rates. The appliance also integrates with applications common in this space, including Spark, Presto, TensorFlow, Elasticsearch, Splunk and MySQL. The integrations provide one-click, AppStore-like deployment of distributed data-driven applications, powered by Docker and Kubernetes.
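MemVerge has not published the details of its tiering algorithm, but the access-rate-driven movement between DRAM, Optane, and flash described above can be sketched as follows. Everything here (the tier names, thresholds, and the TieringCache class) is a hypothetical illustration, not MemVerge’s implementation.

```python
from collections import defaultdict

# Hypothetical tier hierarchy, fastest first; names and thresholds are illustrative.
TIERS = ["dram", "optane", "flash"]

class TieringCache:
    """Keep hot objects in fast tiers; demote cold ones down the hierarchy."""

    def __init__(self, thresholds=None):
        # Minimum accesses per interval an object needs to stay in its tier
        self.thresholds = thresholds or {"dram": 100, "optane": 10}
        self.tier = {}                  # object id -> current tier
        self.hits = defaultdict(int)    # accesses observed this interval

    def access(self, obj):
        self.hits[obj] += 1
        self.tier.setdefault(obj, "dram")  # new data lands in the fastest tier

    def rebalance(self):
        """Run once per interval: demote objects colder than their tier's threshold."""
        for obj, tier in list(self.tier.items()):
            idx = TIERS.index(tier)
            threshold = self.thresholds.get(tier)
            if threshold is not None and self.hits[obj] < threshold and idx + 1 < len(TIERS):
                self.tier[obj] = TIERS[idx + 1]  # move one tier down
        self.hits.clear()
```

A production system would also promote data that heats back up and enforce per-tier capacity limits; this sketch only shows the access-rate-based demotion the briefing describes.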

StorageSwiss Take

Many organizations today have dedicated clusters serving specific modern application workloads, but these clusters are often siloed and can’t support multiple applications. In many cases, they also leave the organization wanting more performance than is presently possible. MemVerge delivers the raw performance these applications need, has the headroom to support multiple data-intensive workloads simultaneously, and greatly simplifies the deployment of these applications.


George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.

Posted in Briefing Note
