What is a Second-Generation Cloud Strategy?

Adoption of cloud services is following a trend similar to that of on-premises data center infrastructure. Many data centers begin with a single infrastructure vendor, compromising flexibility for the assumed simplicity of a single solution. As data centers evolve, they gravitate toward best-of-breed infrastructure, with multiple vendors’ products working together based on standards.

In the cloud market, providers including Amazon Web Services (AWS), Google, and Microsoft all have soup-to-nuts offerings that include hundreds of different cloud services, and IT is drawn to their alleged simplicity. One vendor can’t be the best at everything, though, which results in areas of compromise. At the same time, these providers have closed, incompatible application programming interfaces (APIs), and they charge expensive egress fees for taking data out of their services – creating cloud lock-in.

The good news for IT professionals is that a “second-generation,” multi-cloud, best-of-breed approach is emerging. Providers with specialized capabilities around compute and storage, as well as more aggressive pricing, have entered the market. Customers can now choose the best cloud storage vendor, the best cloud compute vendor, the best analytics vendor, and so forth – and all of these services can work together. The result for customers is a lower-cost infrastructure without the need to compromise on key capabilities, such as performance or availability.

Why Does a Multi-Cloud Approach Yield Best-of-Breed Choice?

The resource flexibility inherent in a multi-cloud approach positions IT to optimize performance as well as other key functionality. From a cost standpoint, the organization is free to select whichever cloud service provider offers the best price for the particular services and capabilities that it needs. The ability to select and change providers easily matters because cloud providers change prices and add new features frequently. With a multi-cloud approach, the organization can purchase from multiple cloud providers to obtain the best price-to-performance ratio, as well as the lowest price for required capabilities – all on a case-by-case basis. Flexibility in the cloud is especially valuable since unneeded resources can simply be turned off on demand. On-premises flexibility is limited because in most cases the hardware and software are purchased outright and can’t be returned.

One of the key advantages of multi-cloud architectures is that they allow performance levels to be balanced on a regional or application-specific basis. This improves quality of service while helping the customer avoid overpaying for levels of performance that an application might not need. For example, colocation providers can typically deliver faster and more predictable performance through lower latency and fewer network traffic bottlenecks, in large part because they have fewer customers than general-purpose public cloud service providers. They may therefore be a strong match for mission-critical application hosting but might not be required for less critical applications.

Multi-cloud environments also reduce risk and improve reliability, because an application can be migrated from one cloud provider to another in the event of a service outage. This helps to support continuous availability. A multi-cloud approach can also help to ensure compliance, because a mixture of clouds may be used to meet varying region- or industry-specific regulations.

What Creates Cloud Provider Lock-In?

The general-purpose approach to the cloud has created lock-in for two major reasons: egress fees and incompatible APIs.

Egress fees are the fees that the major, general-purpose cloud providers charge their customers for moving data to another cloud provider’s service or to an on-premises data center, typically billed per gigabyte. It is very difficult for the customer to predict these fees upfront, in large part because it is difficult (arguably impossible) to anticipate future data access and migration requirements.

To take a specific example, AWS currently charges $23 per month to store one terabyte (TB) of data in its S3 service, so storing 50 TB of data in Amazon S3 costs $1,150 per month. What’s more, if the customer accesses 20% of that data (10 TB) in a month, the customer would incur roughly $900 in additional egress fees (AWS charges approximately $90 per TB for data transferred out of S3, beyond a small monthly free allowance). In other words, accessing 20% of the data stored in AWS in a month approximately doubles the customer’s storage cost. Meanwhile, PUT, GET, and other API request charges add further unpredictable monthly costs, since these requests vary depending on the application and the size of the customer’s objects.
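As a minimal sketch, the calculation above can be expressed in a few lines of Python. The rates below are the illustrative figures used in this example; actual AWS pricing varies by region and tier and changes over time, and the free egress allowance shown is an assumption for illustration only.

```python
# Illustrative monthly-cost estimate using the article's example figures.
# These rates are assumptions for illustration; actual cloud pricing
# varies by provider, region, and tier, and changes over time.

STORAGE_PRICE_PER_TB = 23.0    # $/TB-month stored (per the S3 example above)
EGRESS_PRICE_PER_TB = 90.0     # $/TB transferred out
FREE_EGRESS_TB = 0.1           # assumed small free monthly allowance (~100 GB)

def monthly_cost(stored_tb: float, egress_tb: float) -> dict:
    """Estimate storage plus egress charges for one month."""
    storage = stored_tb * STORAGE_PRICE_PER_TB
    billable_egress = max(egress_tb - FREE_EGRESS_TB, 0.0)
    egress = billable_egress * EGRESS_PRICE_PER_TB
    return {"storage": storage, "egress": egress, "total": storage + egress}

if __name__ == "__main__":
    # 50 TB stored, 20% of it (10 TB) read back out during the month
    print(monthly_cost(stored_tb=50, egress_tb=10))
    # storage ~$1,150, egress ~$890 -- accessing the data roughly doubles the bill
```

Note that this sketch ignores per-request (PUT/GET) charges entirely, which is part of the point: those costs depend on object sizes and access patterns and are even harder to predict upfront.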

Egress and API request fees quickly become expensive because, as previously discussed, it is necessary to be able to migrate applications between a variety of cloud and on-premises infrastructure resources. For example, a user might want to move an application that was developed in AWS to Microsoft’s Azure service. In addition to such a migration being expensive, the application must also be re-engineered because the APIs are incompatible. The egress charges for moving data from one cloud to another, and the lack of standard APIs, mean that applications are rarely moved – creating cloud lock-in. Customers might not be able to adopt an application that better suits their needs because of the expense and difficulty of migration. The pain points associated with cloud lock-in are now giving way to a more open, standards-based approach, just as hardware and operating systems evolved toward broad interoperability.

How to Create a Best-of-Breed, Multi-Cloud Infrastructure

Arguably the most important thing for IT professionals to keep in mind as they architect their multi-cloud infrastructure is that, whereas compute cycles are typically “lightweight,” in the sense of being easy and quick to spin up, data is much harder to move. Data has “gravity,” largely because of the time and expense typically required to migrate it. Addressing this data gravity issue should therefore be a central consideration when designing a multi-cloud architecture.

One of the challenges that IT professionals face as they architect their multi-cloud strategy is that the general-purpose public cloud providers offer numerous “tiers” of storage services. To help IT professionals decide which data should be stored on which tier, an industry of consultants has emerged, many of whom are compensated with a percentage of the savings they create for their clients by optimizing data tiering. Some vendors have applied artificial intelligence-style automation to the problem as well. All of this testifies to the fact that optimizing data placement has become so complicated that even an experienced IT professional often can’t determine the best solution.

Some second-generation vendors have taken the approach of eliminating or drastically reducing the number of storage tiers, reasoning that the cost savings from data tiering are not worth the trouble. This one-size-fits-all approach treats storage more like electricity or bandwidth: if the customer can have one tier that is fast enough for nearly any application and that is also among the cheapest available, there is no longer a need for multiple tiers, except for some extreme use cases.

In the spirit of simplifying the procurement of cloud data storage, some second-generation vendors are also eliminating egress fees and API call fees. With this approach, the customer is charged only for the actual amount of storage capacity consumed, which results in a lower and much more predictable monthly bill. Eventually, cloud storage services will become a commodity, and vendors will be compared on storage price, speed, durability, simplicity, and brand reputation. Purchasing cloud storage capacity will become more like buying network bandwidth or data center rack space.
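To illustrate why capacity-only pricing is more predictable, the sketch below compares the two billing models for the same stored data set as access patterns vary from month to month. All of the rates are hypothetical placeholders, not any vendor’s published prices.

```python
# Hypothetical comparison of two billing models for the same workload.
# All rates below are illustrative assumptions, not published vendor prices.

def capacity_plus_fees_bill(stored_tb, egress_tb, requests_millions,
                            storage_rate=23.0, egress_rate=90.0,
                            request_rate_per_million=0.40):
    """General-purpose model: capacity plus egress plus per-request charges."""
    return (stored_tb * storage_rate
            + egress_tb * egress_rate
            + requests_millions * request_rate_per_million)

def capacity_only_bill(stored_tb, storage_rate=6.0):
    """Capacity-only model: the bill depends only on how much data is stored."""
    return stored_tb * storage_rate

# The same 50 TB data set, with access patterns that vary month to month.
for egress_tb, requests_m in [(2, 5), (10, 40), (25, 120)]:
    a = capacity_plus_fees_bill(50, egress_tb, requests_m)
    b = capacity_only_bill(50)
    print(f"egress={egress_tb:>2} TB  requests={requests_m:>3}M  "
          f"capacity+fees=${a:,.2f}  capacity-only=${b:,.2f}")
```

Under the capacity-only model the bill is the same every month regardless of how heavily the data is accessed, which is the predictability argument in a nutshell.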

In order to function effectively, a multi-cloud architecture also requires close integration and flexible connectivity among cloud service providers, cloud compute providers, managed service providers, and the providers of the technology stack, so that applications can work properly and migrate seamlessly across clouds.

In summary, “flattening” cloud storage hierarchies as much as possible into a centralized tier, avoiding egress and API request fees, and taking a flexible approach to industry alliances and integrations are foundational to making a multi-cloud architecture work.

Who Is Wasabi Technologies?

Wasabi Technologies exclusively provides object-based cloud storage services. What makes Wasabi unique is that it designed both its storage architecture and its pricing model to deliver low-cost, high-performance cloud storage. The company calls its platform “tier free,” meaning that it is cost-effective enough to serve as an archive and long-term retention repository, yet fast enough for use cases such as disaster recovery.

From an architecture perspective, Wasabi chose to develop its own file system, rather than use an open source file system, so that it could enable reads and writes that are faster than the typical object store. This performance is achieved largely by using a front-end flash tier to organize writes and to store all metadata. From a pricing perspective, Wasabi chose not to charge egress fees or API request fees, in order to make its pricing more affordable and more predictable. As another measure to keep its prices low, Wasabi makes highly efficient use of Shingled Magnetic Recording (SMR) hard disk drives, in terms of both write management and capacity utilization.
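The general pattern described here, staging writes and metadata on flash and then writing data sequentially to high-capacity drives, can be sketched at a very high level. The code below is a simplified illustration of that pattern only; it does not represent Wasabi’s actual file system, data layout, or implementation.

```python
# A highly simplified sketch of the "flash front end, sequential capacity
# tier" pattern described above. Illustrative only; not Wasabi's actual
# file system, on-disk format, or code.

class ObjectStoreSketch:
    def __init__(self, flush_threshold_bytes=64 * 1024 * 1024):
        self.flash_buffer = []        # incoming writes staged on flash
        self.metadata = {}            # all object metadata kept on flash
        self.capacity_log = []        # capacity drives written sequentially
        self.buffered_bytes = 0
        self.flush_threshold = flush_threshold_bytes

    def put(self, key: str, data: bytes) -> None:
        # Stage the write on flash and record its metadata immediately.
        self.flash_buffer.append((key, data))
        self.buffered_bytes += len(data)
        self.metadata[key] = {"size": len(data), "location": "flash"}
        if self.buffered_bytes >= self.flush_threshold:
            self._destage()

    def _destage(self) -> None:
        # Write buffered objects to the capacity tier as one large sequential
        # append -- the access pattern that SMR drives handle efficiently.
        offset = sum(len(d) for _, d in self.capacity_log)
        for key, data in self.flash_buffer:
            self.capacity_log.append((key, data))
            self.metadata[key]["location"] = ("capacity", offset)
            offset += len(data)
        self.flash_buffer.clear()
        self.buffered_bytes = 0
```

The design choice being illustrated is that small, random writes never hit the capacity drives directly; they are absorbed by flash and later written out in large sequential runs, which is what makes SMR media practical for this workload.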

Also relevant to the multi-cloud conversation is Wasabi’s Direct Connect service. Direct Connect provides a low-latency connection from the customer’s private cloud environment (whether on-premises or in a colocation facility) directly to Wasabi. Customers pay a fixed monthly price for data ingress and egress, charged simply on the basis of the port speed they select (1 Gbps or 10 Gbps). The result for customers is fast and secure access to their data stored in Wasabi’s cloud, without expensive and unpredictable billing.

Conclusion

As we enter a new phase of cloud market maturity, it is clear that expensive egress fees and incompatible APIs are creating a state of vendor lock-in that prevents customers from simultaneously optimizing performance, maximizing uptime, and controlling costs. A multi-cloud approach is now possible and can help, but only if it is executed correctly. The ideal foundation of a successful multi-cloud architecture is a centralized storage pool that provides low-latency performance while being cost-effective enough for long-term data retention. From there, customers have full flexibility to work with whichever vendors in areas such as application development, data protection, and compute infrastructure best meet their requirements on an application-specific basis.

Sponsored by Wasabi Technologies


George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.
