At its re:Invent conference in Las Vegas, AWS showed how it plans to optimize the enterprise cloud journey. With intelligent tiering, predictive scaling and inexpensive new tiers of capability, AWS is actively looking to give its customers ongoing savings.


The 451 Take

Few companies have gone as far as AWS in showing a commitment to reducing spend. By the company's own reckoning, it has lost half a billion dollars in revenue as a result of its Trusted Advisor. But the company is playing the long game; it knows that short-term losses in revenue will translate into greater trust, better relationships and greater revenue growth in the long term. The company has now dipped its toe into active waste management and cost optimization, the two pillars of what we call the Brave New World, where every cent of cloud spend adds value. We think it will continue to invest in resolving complexity and reducing its customers' bills, but there is a lot to do, and it has only scratched the surface. AWS may provide the tools, but expert partner assistance will be needed to take full advantage of them.


Since we introduced the Cloud Transformation Journey at the start of this year, it has become increasingly apparent from the enterprises we've talked to that complexity isn't a minor issue – it is actively degrading their cloud experience. Cloud starts off as a big driver of cost savings, but resource sprawl and unexpected costs lead to runaway spend (as shown in the figure below). In fact, we've shown that sprawl, and therefore complexity, is inevitable in cloud-native applications, and there is only one way to prevent the resulting fragility: constant optimization. As such, there is a big opportunity for service providers to resolve this complexity through optimization.

At this year's AWS re:Invent conference in Las Vegas, the company was keen to show that it is taking steps to resolve this challenge. In fact, at each step of the Cloud Transformation Journey, AWS now has a product or service aimed at squeezing enterprise costs. In this report, we examine a few of the announcements at re:Invent that reflect how the hyperscaler is giving enterprises the capability to make savings. In the figure below, yellow blocks show where AWS made new announcements pertinent to steps along the journey (lighter yellow shows existing products).

 

Cost optimization

–Intelligent Tiering

AWS has been offering its Trusted Advisor service since 2013, giving users recommendations on how to reduce costs through rightsizing, termination of orphaned resources, the use of Reserved Instances and other actions. However, Trusted Advisor provides recommendations only; it is up to the user to make the changes and perform the optimization. S3 Intelligent Tiering is AWS's first foray into performing optimization on the user's behalf. In this new class of object storage, AWS automatically moves objects between two access tiers – one optimized for frequent access and another for infrequent access – charged at the same rates as S3 Standard and S3 Standard-Infrequent Access, respectively. The service moves objects that haven't been accessed for 30 days to the infrequent tier; if an object is then accessed, it is moved back to the frequent tier. It isn't rocket science, nor is it technically challenging for AWS to productize Intelligent Tiering as a service, but it is a statement of AWS's intent – it is essentially making it easier for its customers to spend less, although it does charge a monitoring fee of $0.0025 per 1,000 objects per month.
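For illustration, the minimal sketch below (using boto3; the bucket name, object key and payload are our own placeholders, not AWS reference code) shows how a developer might write an object directly into the Intelligent Tiering storage class, after which AWS handles tier movement automatically.

```python
# Minimal sketch: uploading an object into the S3 Intelligent Tiering storage
# class via boto3. Bucket name, key and body are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="example-bucket",          # hypothetical bucket
    Key="logs/2018/archive.json",     # hypothetical object key
    Body=b'{"example": true}',
    StorageClass="INTELLIGENT_TIERING",  # AWS then moves the object between the
                                         # frequent and infrequent access tiers
                                         # based on access patterns
)
```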

Intelligent Tiering removes the burden of manual optimization from the user, giving ongoing value in the form of reduced costs and an easier life. In our journey, it represents a step into the Brave New World, where costs are optimized automatically on an ongoing basis, such that the user knows those resources are as cheap as can reasonably be achieved. We expect AWS's capability in this area to become more technologically advanced (using machine learning and even AI) and to broaden across a range of cloud services. AWS claims its Trusted Advisor has already realized $500m in customer savings (i.e., forgone AWS revenue) since launch. We expect Intelligent Tiering on S3 to be only the beginning of this type of optimization capability.

–Predictive Scaling

Since the Cloud Price Index started tracking the 500,000 SKUs offered by AWS, Google and Microsoft, we have mooted the need for service providers (and enterprises) to use machine learning to optimize performance. AWS's new Predictive Scaling service does exactly this. To date, AWS has provided a range of tools to allow applications to scale, most notably CloudWatch metrics (which track measures such as CPU utilization), Auto Scaling (which adds or removes resources as those metrics change) and Elastic Load Balancing (which distributes traffic across resources). However, this process has always relied on a trigger, and – even with a boot time of just a few minutes – there has always been a lag between resource demand and new resource provisioning; a lag that could cost a retailer a lot of revenue. Many enterprises we have dealt with provision extra capacity in advance to cope with this lag, and even scale to thousands of resources prior to an event (such as a sale) to cope with these spikes in demand. Unless the retailer can predict demand with perfect accuracy, the result of such pre-purchasing is either wasted resources or performance degradation.

Predictive Scaling uses machine learning algorithms to analyze the scaling behavior of EC2 instances. Based on the instance's past behavior (a minimum of 24 hours' worth) plus data on other workload behavior, the service creates a forecast for the next 48 hours, refreshed once every 24 hours. Users can set the scaling plan to optimize for availability, cost, a balance of the two, or a custom policy, and forecasting can be based on CPU or network performance, or a different CloudWatch metric. A warm-up buffer can be configured to give instances time to boot ahead of a forecast requirement. Such predictive capability puts Waste Management in our Brave New World phase by reducing the likelihood of waste from manual speculative provisioning, while ensuring performance by provisioning resources in time for the demand.
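At the API level, predictive scaling is configured through an AWS Auto Scaling scaling plan. The sketch below is illustrative only: the plan name, tag filter, Auto Scaling group, capacity bounds, target value and buffer time are all assumptions of ours, chosen to show the shape of a CPU-based forecast-and-scale plan via boto3.

```python
# Illustrative sketch of enabling predictive scaling on an existing Auto Scaling
# group through an AWS Auto Scaling plan (boto3). Names, capacities and the
# warm-up buffer are hypothetical values, not recommendations.
import boto3

autoscaling_plans = boto3.client("autoscaling-plans")

autoscaling_plans.create_scaling_plan(
    ScalingPlanName="retail-frontend-plan",                  # hypothetical plan name
    ApplicationSource={
        "TagFilters": [{"Key": "app", "Values": ["frontend"]}]  # hypothetical tag
    },
    ScalingInstructions=[
        {
            "ServiceNamespace": "autoscaling",
            "ResourceId": "autoScalingGroup/frontend-asg",   # hypothetical ASG
            "ScalableDimension": "autoscaling:autoScalingGroup:DesiredCapacity",
            "MinCapacity": 2,
            "MaxCapacity": 100,
            # Dynamic scaling target: keep average CPU around 50%.
            "TargetTrackingConfigurations": [
                {
                    "PredefinedScalingMetricSpecification": {
                        "PredefinedScalingMetricType": "ASGAverageCPUUtilization"
                    },
                    "TargetValue": 50.0,
                }
            ],
            # Forecast load on total CPU and provision ahead of demand.
            "PredefinedLoadMetricSpecification": {
                "PredefinedLoadMetricType": "ASGTotalCPUUtilization"
            },
            "PredictiveScalingMode": "ForecastAndScale",
            # Warm-up buffer: launch instances five minutes before forecast need.
            "ScheduledActionBufferTime": 300,
        }
    ],
)
```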

 

Resource governance

–Resource Access Manager

Before waste management or optimization, enterprises need to control who can spin up resources, and to what degree. Resource governance is one of the steps in the War and Peace phase, where CIOs take action to control spiraling costs.

AWS has made strides toward preventing runaway costs through a number of services released over the past few years. Service Catalog allows administrators to define policies for provisioning resources, and Cost Explorer and Budgets allow financial administrators to understand costs and lock down budgets and approvals. At re:Invent, AWS announced Resource Access Manager, which allows multiple AWS accounts to share resources. From an economic standpoint, this has the attractive feature that resources don't have to be duplicated across accounts, thereby reducing costs.
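As a hedged illustration of how that sharing might look in practice, the sketch below shares a VPC subnet with a second account via boto3; the share name, subnet ARN and account IDs are placeholders of ours.

```python
# Hedged sketch: sharing a resource (here, a VPC subnet) with another AWS
# account using Resource Access Manager via boto3. The ARN and account IDs
# are placeholders.
import boto3

ram = boto3.client("ram")

ram.create_resource_share(
    name="shared-network",                       # hypothetical share name
    resourceArns=[
        "arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0abc1234"  # placeholder ARN
    ],
    principals=["222222222222"],                 # account the subnet is shared with
    allowExternalPrincipals=False,               # restrict sharing to the organization
)
```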


Price model optimization

–Product Tiering

We recently predicted a new trend in cloud for tiered services, where service providers would enable enterprises to save money by creating more basic versions of their standard capability. Enterprises would mix and match different versions of the same product line to match differing workload requirements on criteria such as performance or recovery time. Part of the War and Peace phase of our journey, Product Tiering is a form of price model optimization, letting users choose the best model to suit their specific needs.

AWS fully embraced this concept at re:Invent this year, launching new, inexpensive versions of S3 Glacier archiving and EFS file storage. S3 Glacier Deep Archive increases the time to retrieve data from standard Glacier's four hours to 12 hours, but will be priced from $0.00099 per GB-month, roughly a 75% saving over standard Glacier. It will be available in all regions in 2019, and AWS claims the pricing is comparable to that of tape storage.
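To put the quoted price in perspective, the back-of-the-envelope calculation below uses only the $0.00099 per GB-month figure and the roughly 75% saving stated above; the 100TB archive size is an arbitrary example of ours.

```python
# Back-of-the-envelope comparison based on the figures quoted above:
# $0.00099 per GB-month for Deep Archive and a roughly 75% saving versus
# standard Glacier. The 100TB archive size is an arbitrary example.
deep_archive_per_gb_month = 0.00099
implied_standard_glacier = deep_archive_per_gb_month / (1 - 0.75)  # ~$0.004/GB-month

archive_gb = 100 * 1024  # 100TB expressed in GB

monthly_deep_archive = archive_gb * deep_archive_per_gb_month
monthly_standard = archive_gb * implied_standard_glacier

print(f"Deep Archive:            ${monthly_deep_archive:,.2f}/month")  # ~$101
print(f"Implied standard Glacier: ${monthly_standard:,.2f}/month")     # ~$405
```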

Similarly, AWS launched an Infrequent Access (IA) class of its Elastic File System, claiming an 85% cost saving compared with the standard tier. With Lifecycle Management enabled, EFS moves files that haven't been accessed for 30 days to the IA tier.
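A minimal sketch of turning this on via boto3 follows; the file system ID is a placeholder.

```python
# Minimal sketch: enabling EFS Lifecycle Management via boto3 so that files not
# accessed for 30 days move to the Infrequent Access (IA) storage class.
import boto3

efs = boto3.client("efs")

efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",                   # hypothetical file system
    LifecyclePolicies=[{"TransitionToIA": "AFTER_30_DAYS"}],
)
```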

–DynamoDB On-Demand/Elastic Inference

The company has also made the sensible decision to create an on-demand pricing model for its NoSQL database, DynamoDB, allowing users to build tables without having to specify read or write throughput in advance. This is another option for enterprises that want to optimize their cost models, and it gives developers the ability to experiment or build scalable applications without having to make a commitment. AWS will retain its provisioned capacity mode, which is best used for predictable application traffic. Data storage costs are the same in both modes, but it is challenging to compare the prices for read and write capacity, especially considering that provisioned capacity mode also allows reserved capacity to be purchased. We will delve deeper into this comparison in 2019.
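The practical difference is visible at table-creation time: with the billing mode set to pay-per-request, no throughput figures are supplied at all. The sketch below (boto3; table and attribute names are our own placeholders) shows this.

```python
# Sketch of creating a DynamoDB table in on-demand mode via boto3. With
# BillingMode set to PAY_PER_REQUEST, no read/write throughput is specified
# up front. Table and attribute names are hypothetical.
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="orders",                                               # hypothetical table
    AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # on-demand: pay per read/write request
)
```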

In a similar vein, Elastic Inference is essentially a GPU capability that can be 'attached' to virtual machines, similar to Google's attachable GPUs. Again, this gives users another model for consuming GPU capacity: if they need a lot of GPU, they can use a GPU instance; if not, they can consume just what they need, for as long as they need it, through Elastic Inference.
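As a hedged example of the 'attach' model, the sketch below launches a CPU instance with a fractional inference accelerator via boto3; the AMI ID, instance type and accelerator size are assumptions of ours, not sizing guidance.

```python
# Hedged sketch: attaching an Elastic Inference accelerator to an EC2 instance
# at launch via boto3. AMI ID, instance type and accelerator size are
# placeholder assumptions.
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",          # hypothetical AMI
    InstanceType="c5.xlarge",                 # CPU instance; GPU capacity comes from the accelerator
    MinCount=1,
    MaxCount=1,
    ElasticInferenceAccelerators=[{"Type": "eia1.medium"}],  # fractional GPU for inference
)
```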

 

Hybrid Cloud/A Tale of Two Options

–Outposts

As we've shown in the Cloud Price Index, private cloud can be cheaper than public cloud if it is operated at a high enough level of utilization and labor efficiency. In fact, according to 451 Research's Voice of the Enterprise: Cloud, Hosting & Managed Services, Organizational Dynamics 2018 report, cost is the reason 34% of enterprises are moving some data back to private cloud. At the event, AWS announced Outposts, an on-premises version of its own hardware stack running a VMware- or AWS-based cloud system. Details are sketchy at the moment, but we expect AWS's pricing to be transparent and public, albeit charged at a premium – our hunch, based on CPI data, is a 20-50% premium over public cloud. Snowball Edge provides a mini private cloud capability for edge workloads, and AWS announced new sizes at re:Invent this year.

Competition


Microsoft is the runner-up to AWS, focusing primarily on its huge incumbent base of Windows users and on providing a unified cloud experience across public and private clouds. Microsoft has recently added a number of AWS-style pricing models to its roster, including reserved instances and a form of spot instances. It acquired Cloudyn to aid cost management and reporting, but we've heard anecdotally that it isn't as well integrated as one might expect. We've also heard that Azure Stack, Microsoft's on-premises private cloud play, is fairly expensive. AWS's Outposts is a threat here, and we would bet that AWS will be more transparent about its pricing than Microsoft.

IBM is a big player here, too, but the user experience across its range of IaaS, PaaS and SaaS – plus its wide array of hardware and software – can be a confusing melting pot of technologies. IBM's acquisition of Red Hat might aid the unification of its hybrid story, as should its recent announcements of support for multi-cloud deployments.

Google continues to bolster its credentials with enterprise case studies and, although relatively small today, has a strong play in data analytics, the area on which its core business is built. It has attempted to undermine AWS's pricing complexity through innovations such as its sustained-use pricing model, which rewards users with a discount for ongoing consumption. Google, too, has recently become more open-minded about hybrid and private cloud deployments, and launched GKE On-Prem earlier this year.

Rackspace still plays in cloud, but now focuses on managing third-party infrastructure instead of its own public cloud. Oracle has been attacking AWS on price-performance, even driving cabs emblazoned with 'Save 50% by moving to Oracle' around Las Vegas during the event. Other providers include CenturyLink, Fujitsu, NTT, Virtustream, DigitalOcean, Huawei, Alibaba and Tencent.
Owen Rogers
Research Director, Digital Economics Unit

As Research Director, Owen Rogers leads the firm's Digital Economics Unit, which helps customers understand the economics behind digital and cloud technologies so they can make informed choices when costing and pricing their own products and services, as well as those of their vendors, suppliers and competitors. He also architected the Cloud Price Index.

Al Sadowski
Research Vice President - Voice of the Service Providers

Al is responsible for 451 Research’s Voice of the Service Provider offering. He focuses on tracking and analyzing service provider adoption of emerging infrastructure, spanning compute, storage, networking and software-defined infrastructure.
Jean Atelsek
Analyst, Cloud Price Index

Jean Atelsek is an analyst for 451 Research’s Digital Economics Unit, focusing on cloud pricing in the US and Europe. Prior to joining 451 Research, she was an editor at Ovum, spiffing up reports, forecasts and data tools covering telecoms and service providers, fixed and wireless networks, and consumer technology among other topics. 
