Organizations are experiencing unprecedented growth in their virtual environments, and as those environments grow, maintaining consistent performance becomes a major problem. Most enterprises combat the challenge by over-provisioning resources: they buy more compute than they need and make massive investments in all-flash storage systems. While that may solve the performance challenge, IT is left justifying an out-of-control budget with no idea when the performance problems will return.
VMware and other hypervisor vendors provide plenty of methods for managing the allocation of compute resources but very limited capabilities at the storage level, which is why all-flash array sales are rising. It’s like cutting a watermelon with a sledgehammer: effective, but not efficient.
Getting What You Need
All-flash systems have their place, but IT needs to ensure it buys only the amount of all-flash it needs and can accurately predict when it will need more. That calls for storage systems that provide quality of service (QoS) at virtual machine (VM) granularity, so that each VM gets its own swim lane.
VM granularity is key not only to ensuring predictable performance per application but also to analyzing and predicting when storage resources will need to be expanded or have their performance enhanced. This analysis allows IT to place more VMs in a smaller footprint with the comfort of knowing it will have plenty of advance notice of any resource shortage that may arise.
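To make the swim-lane idea concrete, here is a minimal sketch of one common way per-VM QoS can be enforced: a token bucket per VM, where each VM draws IOs only from its own budget. This is an illustrative technique, not a description of Tintri's actual implementation; all names are hypothetical.

```python
import time

class TokenBucket:
    """Per-VM IOPS limiter: each VM draws from its own budget ("swim lane")."""

    def __init__(self, iops_limit: int):
        self.rate = iops_limit           # tokens (IOs) refilled per second
        self.capacity = iops_limit       # burst ceiling
        self.tokens = float(iops_limit)  # start with a full bucket
        self.last = time.monotonic()

    def allow_io(self) -> bool:
        """Return True if one IO may proceed now, False if it must queue."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst ceiling.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per VM: a noisy neighbor exhausts only its own budget,
# leaving the other lanes untouched.
limits = {"vm-web": TokenBucket(1000), "vm-db": TokenBucket(5000)}
```

Because each VM has its own bucket, a runaway workload in `vm-web` cannot starve `vm-db`, which is the property the swim-lane metaphor describes.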
A New Take on Storage
VM-level awareness is the foundation of the Tintri solution. It is a new take on storage, matched to the reality of the modern data center where virtualization rules the day. Traditional storage architectures use decades-old designs with restricted visibility into the actual VMs running on them. Tintri knows exactly what each VM is doing and provides concise, easy-to-understand analytics for consuming that information.
Recently Tintri updated its offering to include a new cloud connector, compute analytics, and accelerated storage live migration for VM Scale-out. The cloud connector provides data protection, archive, and disaster recovery for a Tintri customer’s mission-critical, on-premises applications through seamless integration with Amazon and IBM cloud resources.
The addition of compute analytics completes the Tintri Analytics puzzle by not only providing details on storage IO utilization but also on how the compute environment is performing. Tintri Analytics will provide compute trending based on historical data and will help its customers predict how much storage and compute they need based on current growth trends and what-if simulations of new projects. The result is an end-to-end view of resource consumption built right into the storage system.
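The growth-trend prediction described above can be sketched with a simple least-squares projection over historical capacity samples. This is a generic forecasting sketch under my own assumptions, not Tintri Analytics code; the function name and data are illustrative.

```python
def forecast_capacity(samples, horizon_days):
    """Fit a least-squares linear trend to (day, used_tb) samples and
    project usage horizon_days past the most recent sample."""
    n = len(samples)
    sx = sum(d for d, _ in samples)
    sy = sum(u for _, u in samples)
    sxx = sum(d * d for d, _ in samples)
    sxy = sum(d * u for d, u in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # TB per day
    intercept = (sy - slope * sx) / n
    last_day = max(d for d, _ in samples)
    return slope * (last_day + horizon_days) + intercept

# Hypothetical history: 10 TB used, growing ~0.5 TB/day.
history = [(0, 10.0), (1, 10.5), (2, 11.0), (3, 11.5)]
print(round(forecast_capacity(history, 30), 1))  # → 26.5 TB projected
```

A "what-if" simulation of a new project is then just the same projection with the project's expected growth added to the slope.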
The Tintri scale-out cluster is built from loosely coupled Tintri storage nodes, a design that provides increased per-node flexibility. The cluster can now support up to 64 Tintri storage systems, all managed from a single pane of glass via the Tintri Global Center management console. Tintri is adding multi-tenancy capabilities such as hard quotas on logical capacity per tenant, tenant separation, encryption, and per-VM chargeback analytics.
VM Scale-out recently added offloaded storage live migration for both vSphere and Hyper-V. Tintri claims that migrating running applications between arrays is 10x to 30x faster than on other storage systems, reducing migrations from hours to minutes with no compute load on the physical hosts.
Tintri long ago established itself as a leader in the VM-specific storage market. It has continued to expand its capabilities, providing analytics that encompass more of the environment and extending its functionality into the cloud. The result is that Tintri customers purchase exactly the resources they need to maintain today’s service levels, with the insight to know when future demand will require more.