Blog Archives

Is NVMe Enough for Efficient Hyperscale Data Centers?

Hyperscale architectures typically sacrifice resource efficiency for performance by using direct-attached storage instead of a shared storage solution. That lost efficiency, though, means the organization is spending money on excess compute, graphics processing units (GPUs) and storage capacity that…

Posted in Blog

Infinidat Introduces Elastic Data Fabric

More Efficient NVMe-oF at Enterprise Scale

For large enterprises, data stores continue to grow into the petabytes (PB), and increasingly need to operate across on-premises infrastructure and cloud services in order to balance cost, control and performance needs. Meanwhile, the…

Posted in Briefing Note

What is High versus Extreme Performance?

While some applications in the data center require extreme performance, high performance is now a default requirement for all production applications. How performance is measured varies by application (for example, one application might require very high throughput while others might…

Posted in Blog

Is Object Storage Really the Future of Unstructured Data Storage?

Simply put, unstructured data is breaking traditional network-attached storage (NAS) architectures. The scale-up nature of traditional NAS solutions makes the storage controller a bottleneck when handling the intensive metadata operations associated with unstructured files, forcing…

Posted in Blog

Object Storage: More than Just Archive?

Object storage is almost exclusively associated with archive use cases. Long-term retention is important for ensuring compliance, and object storage indeed functions effectively as a low-cost, searchable data repository. At the same time,…

Posted in Blog

The Problems with Hyperscale Storage

Direct-attached storage (DAS) is the default storage “infrastructure” for data-intensive workloads like Elastic, Hadoop, Kafka and TensorFlow. The problem, as we detailed in the last blog, is that using DAS creates a brittle, siloed environment. Compute nodes can’t be…

Posted in Blog

Why is a New Generation of HCI Needed for the Hybrid Cloud?

For the vast majority of enterprises, the question is not whether to go all-in on the public cloud or to keep all workloads on-premises. Using both in a hybrid cloud architecture is required to meet applications’ wide-ranging cost, control and…

Posted in Blog

15 Minute Webinar: NVMe Readiness Assessment

Most All-Flash Arrays were bought in the last few years and have not come anywhere close to “end of life,” yet most vendors are now shipping NVMe All-Flash Arrays that offer better performance. As enticing as these new systems might…

Posted in Webinar

How to Maximize Resources and Efficiency for Artificial Intelligence, Machine Learning and Edge Compute – Liqid Briefing Note

Composable infrastructure, whereby resources are disaggregated and may be recomposed on the fly, stands to serve a number of key, forward-looking IT infrastructure requirements. Resources may be added and then returned for use by a different application as demand shifts.

Posted in Briefing Note

A No Compromise Approach to Software Defined Storage – DataCore Software Briefing Note

Software Defined Storage (SDS) was supposed to displace legacy hardware storage solutions that locked customers into a particular vendor. Years after its introduction, SDS continues to struggle to gain critical mass within data centers. Today most data centers, especially at…

Posted in Briefing Note