Blog Archives

StorCentric Acquires Retrospect – Comprehensive Primary through Secondary Storage Capabilities

The lines between primary and secondary storage infrastructures are blurring. Today’s typical backup and disaster recovery workloads require faster performance while production workloads demand growing amounts of capacity. IT requires as consolidated a storage infrastructure as possible for simplicity. At…

Posted in Briefing Note

The ROI of Extreme Performance

In our recent webinar, “Flash Storage – Deciding Between High Performance and EXTREME Performance”, Storage Switzerland and Violin Systems discussed the use cases that demand extreme performance over high performance. Extreme performance is for applications that can benefit from consistent…

Posted in Blog

Does the Storage Media Matter Anymore?

It used to be that the storage media was the determining factor of an application’s performance. However, with non-volatile memory express (NVMe) solid-state drives (SSDs) now having reached price parity with Serial-Attached SCSI (SAS) SSDs, the potential to achieve hundreds…

Posted in Blog

The State of Server Virtualization: Summer 2019

The “software-defined data center” (SDDC) is hailed by many as the data center architecture of the future – promising to bring new levels of hardware utilization and a simplified, public cloud-like user experience on-premises. Previously, Storage Switzerland detailed the key…

Posted in Blog

DriveScale Composable Infrastructure: Elastic and Efficient Resources for Modern Workloads

Modern workloads such as Hadoop, Kafka and machine learning are demanding in terms of the volume of data that must be processed, the speed at which that data must be processed, and the fact that their capacity and performance requirements…

Posted in Blog

Is NVMe Enough for Efficient Hyperscale Data Centers?

Hyperscale architectures typically sacrifice resource efficiency for performance by using direct attached storage instead of a shared storage solution. That lost efficiency, though, means the organization is spending money on excess compute, graphics processing units (GPUs) and storage capacity that…

Posted in Blog

The Problems with Hyperscale Storage

Direct attached storage (DAS) is the default storage “infrastructure” for data-intensive workloads like Elastic, Hadoop, Kafka and Splunk. The problem, as we detailed in the last blog, is that using DAS creates a brittle, siloed environment. Compute nodes can’t be…

Posted in Blog

The Problems that Scale-Out Architectures Create

Data-intensive workloads like Elastic, Hadoop, Kafka and Splunk are unpredictable, making it very difficult to design flexible storage architectures to support them. In most cases, scale-out architectures utilize direct attached storage (DAS). While DAS delivers excellent performance to the…

Posted in Blog

Validating the Developing Standards of NVMe – University of New Hampshire InterOperability Laboratory Briefing Note

The non-volatile memory express (NVMe) storage controller interface continues extending into the data center, on the back of growing requirements for new levels of application performance acceleration. NVMe is typically deployed as direct-attach storage via a Peripheral Component Interconnect Express…

Posted in Briefing Note

Consumption-Based IT with No Minimum Commitment – Lenovo TruScale Briefing Note

The modern data center infrastructure market is characterized by the need for improved agility and responsiveness, and to make more strategic use of IT budgets. Cloud computing has introduced the concept of subscription-based IT service delivery, but it lacks on-premises…

Posted in Briefing Note