The ascendancy of Amazon Web Services Inc. under the leadership of Andy Jassy was marked by a tsunami of data and a corresponding wave of cloud services to leverage that data. Those services mainly came in the form of primitives: basic building blocks that developers used to create more sophisticated capabilities.
AWS in the 2020s, led by Chief Executive Adam Selipsky, will be marked by four high-level trends in our view: 1) a rush of data that will dwarf anything previously seen; 2) a doubling or tripling down on the basic elements of cloud – compute, storage, database, security and the like; 3) a greater emphasis on end-to-end integration of AWS services to simplify and accelerate customer adoption of cloud; and 4) significantly deeper business integration of cloud, beyond information technology, as an underlying element of organizational operations.
In this Breaking Analysis, we extract and analyze nuggets from John Furrier’s annual sitdown with the CEO of AWS. We’ll share data from Enterprise Technology Research and other sources to set the context for the market and competition in cloud, and we’ll give you our glimpse of what to expect at re:Invent 2022.
First, a quick update on the hyperscale revenue forecast
Before we get into the core of our analysis, Alibaba this past week announced earnings and we’ve updated our third-quarter hyperscale computing forecast for the year, as seen above.
We won’t spend much time on this, but suffice to say Alibaba’s cloud business is suffering from the macro trend and is seeing a more substantial slowdown than its peers. China headwinds combined with a restructuring of its cloud business have led to significantly slower growth for Alibaba. This puts our year-end estimates for 2022 revenue at $161 billion for all four hyperscalers combined. This still represents a healthy 34% growth, with AWS expected to surpass $80 billion in revenue.
Optimizing cloud spend is a durable trend
On a related note, one of the big themes in cloud that we’ve been reporting is how customers are reassessing their cloud spend. Below is a graphic we pulled from AWS’ website that shows the various pricing plans at a high level. As you know, they get much more granular.
Basically, AWS offers four levels. On-demand – i.e., pay by the drink; spot instances – i.e., use spare capacity when it’s available in the right place at the right time; reserved instances, or RIs, where you pay up front to get a discount; and savings plans, where customers commit to a one- or three-year term.
You’ll notice that we labeled the choices in a different order than AWS presents on its website and that’s because we believe the natural progression for customers is to start out on-demand, maybe experiment with spot instances, move to RIs when the cloud bill becomes too onerous and, if you’re large enough, lock in for one or three years.
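To see why that progression makes financial sense, here is a rough sketch comparing annualized cost across the tiers. The hourly rates below are hypothetical placeholders, not actual AWS prices, which vary by instance type, region and term.

```python
# Hypothetical hourly rates for a single instance (illustrative only --
# real AWS prices vary by instance type, region and commitment term).
HOURS_PER_YEAR = 24 * 365

rates = {
    "on_demand": 0.10,         # pay by the drink, no commitment
    "spot": 0.03,              # spare capacity, can be interrupted
    "reserved_1yr": 0.06,      # pay up front for a discount
    "savings_plan_3yr": 0.05,  # deepest discount, longest commitment
}

annual_cost = {plan: rate * HOURS_PER_YEAR for plan, rate in rates.items()}

for plan, cost in sorted(annual_cost.items(), key=lambda kv: kv[1]):
    discount = 1 - cost / annual_cost["on_demand"]
    print(f"{plan:18s} ${cost:8,.2f}/yr  ({discount:.0%} off on-demand)")
```

With these made-up rates, the three-year savings plan halves the on-demand bill, which is exactly the lever that pulls large customers down the progression toward longer commitments.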
We believe on-demand accounts for the majority of AWS customer spending. But those on-demand customers are also at-risk customers. Sure, there are switching costs such as egress and learning curve, but many customers have multiple clouds and if you’re not married to AWS with a longer-term commitment, there’s less friction to switch.
The savings plan, which AWS lists second after on-demand, is the most attractive from a financial perspective, and it’s also the plan that requires the greatest commitment.
The trend toward subscription pricing models varies by sector
In fairness to AWS, it’s true that there’s a trend toward subscription-based pricing. But it’s less pronounced in infrastructure as a service than in software as a service. This chart below is from an ETR drill-down survey (N=300). Pay attention to the bars on the right. The pink is subscription-based and the light blue is consumption-based. And you can see there’s a steady trend toward subscription.
We’ll dig into this in a later Breaking Analysis episode, but we’ll share now that when it comes to IaaS and platform as a service, 44% of customers either prefer or are required to use on-demand pricing, whereas around 40% say they either prefer or are required to use subscription pricing. The further you move up the stack, the more prominent subscription pricing becomes, with 60% or more of customers often requiring or preferring subscriptions for software-based offerings. Notably, cybersecurity tracks software, likely because, as with software, you’re not shutting down your cyber protection on demand.
Putting data at the core of the business
As a preview to our expectations for re:Invent, we’ll make an observation related to data.
In his 2018 book Seeing Digital, author David Moschella made the point that, whereas most companies organize data operations on the periphery of their business – almost as an add-on function – mega data companies such as Google LLC, Amazon.com Inc. and Facebook owner Meta Platforms Inc. have placed data at the core of their operations and apply machine intelligence to that foundation.
Why haven’t most companies followed suit? The fact is it’s not easy to do what the internet giants have done.
Embedding machine learning and AI into apps and data is a long-term trend
This brings us to re:Invent 2022 and the future of cloud.
Machine learning and AI will increasingly be infused into applications. The data stack and applications stack are coming together as organizations build data apps and data products. Data expertise is moving from the domain of highly specialized individuals to everyday business people and we are just at the cusp of this trend.
We believe this will be a massive theme of not only re:Invent ‘22 but of cloud in the 2020s. The vision of data mesh will be realized this decade in our view.
Analyzing nuggets from Selipsky
Let’s now share a glimpse of Adam Selipsky’s thinking. We’ll extract and analyze statements from his sit-down with John Furrier. Each year, John has a one-on-one conversation with the AWS CEO. He has been doing this for 10 years and the outcome is a better understanding of the directional thinking of the leader of the No. 1 cloud platform. We’ll first share some direct quotes, run through them with some commentary and then bring in some ETR data to assess the market implications.
The cloud as a business transformer
IT in general and data are moving from departments into becoming intrinsic parts of how businesses function.
Our takeaway is the cloud generally and AWS specifically are moving toward deeper business integration.
In time we’ll stop talking about people who have the word [data] analyst in their title. Rather, we’ll have hundreds of millions of people who analyze data as part of their day-to-day job, most of whom will not have the word analyst anywhere in their title. We’re talking about graphic designers and pizza shop owners and product managers… and data scientists as well.
Our takeaway here is this speaks to democratizing data and fits with the data mesh trend.
Customers need to be able to take an end-to-end integrated view of their entire [data] journey from ingestion to storage to harmonizing the data to being able to query it, doing business intelligence and human-based analysis to being able to collaborate and share data… and we’ve been putting together a broad suite of tools from database to analytics to business intelligence to help customers with that.
Our takeaways: This last statement is true. However, under Jassy, there was not a lot of emphasis on an end-to-end integrated view. We believe it’s clear from these statements that Selipsky is getting direct customer feedback that demands attention to this capability.
Simplify, minimize data movement and inject intelligence into the data equation
If you have data in one place you shouldn’t have to move it every time you want to analyze that data. It would be much better if you could leave that data in place and avoid all the ETL, which has become a nasty three-letter word. More and more we’re building capabilities where you can query that data in place.
Our takeaway: This is a trend we see a lot in the marketplace. Oracle Corp. with MySQL Heatwave, the entire trend toward converged database, Snowflake Inc. and MongoDB Inc. extending their platforms into transactions and analytics respectively and so forth.
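To make the “query in place” idea concrete, here is a toy sketch using Python’s built-in sqlite3 module: rather than extracting rows from one database and loading them into another (the ETL step Selipsky calls a nasty three-letter word), a second database is attached and queried where it sits. The file and table names are purely illustrative; AWS’s analogous in-place query capabilities operate on cloud storage at a very different scale, but the principle is the same.

```python
import sqlite3

# Two separate database files standing in for two separate data stores
# (hypothetical names and data, purely for illustration).
orders_db = sqlite3.connect("orders.db")
orders_db.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER, customer_id INTEGER, total REAL)")
orders_db.execute("DELETE FROM orders")
orders_db.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                      [(1, 100, 25.0), (2, 101, 40.0), (3, 100, 15.0)])
orders_db.commit()
orders_db.close()

customers_db = sqlite3.connect("customers.db")
customers_db.execute("CREATE TABLE IF NOT EXISTS customers (id INTEGER, name TEXT)")
customers_db.execute("DELETE FROM customers")
customers_db.executemany("INSERT INTO customers VALUES (?, ?)",
                         [(100, "Acme"), (101, "Globex")])
customers_db.commit()

# Query in place: attach the orders database instead of copying its rows.
customers_db.execute("ATTACH DATABASE 'orders.db' AS ord")
rows = customers_db.execute("""
    SELECT c.name, SUM(o.total)
    FROM customers c JOIN ord.orders o ON o.customer_id = c.id
    GROUP BY c.name ORDER BY c.name
""").fetchall()
print(rows)  # [('Acme', 40.0), ('Globex', 40.0)]
```

No rows moved between the two stores; the join ran where the data lives.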
The other phenomenon is infusing machine learning into all those capabilities.
Our takeaway: Yes, the comments from the Moschella graphic above come into play here. Organizations must put data at the core and they’ll do so by deploying intelligent applications and infrastructure.
It’s not a data cloud. It’s not a separate cloud. It’s a series of broad but integrated capabilities to help you manage the end-to-end lifecycle of your data.
Our takeaway: This is a possible blind spot for AWS. In Furrier’s recent post on his interview with Selipsky, he cited commentary from Ali Ghodsi, CEO of Databricks Inc., and Dev Ittycheria, CEO of MongoDB. The notable delta is Selipsky sees these companies and the likes of Snowflake as independent software vendors. Both Ghodsi and Ittycheria see themselves as data platforms with a growing ecosystem of ISVs. In our view, they are building what we call superclouds on top of AWS and other clouds.
Closing the data sharing gaps
Selipsky’s next commentary that we’ll highlight focused on data governance.
Data governance is a huge issue. Really what customers need is to find the right balance for their organization between access to data and control. And if you provide too much access, then you’re nervous your data’s going to end up in places that it shouldn’t be, viewed by people who shouldn’t be viewing it. And you feel like you lack security around that data. And by the way, what happens then is people overreact and lock it down so that almost nobody can see it.
Our takeaway: Really well-put. But today this is a gap for AWS in our view. It’s not easy to share data in a safe way, especially outside your organization.
Data clean rooms: That’s a good idea
[Data] Clean room is a really, really interesting area. And I think there’s a lot of different industries in which clean rooms are applicable. I think that clean rooms are an interesting way of enabling multiple parties to share and collaborate on the data while completely respecting each party’s rights and their privacy mandate.
Our takeaway: This is also a gap within AWS today. Sure, AWS has Redshift data sharing. But that capability only works across clusters for users within the same AWS account. To share data with users on other AWS accounts, they need to extract it from one system and load it into another – the very process Selipsky alluded to that he’s trying to eliminate.
We know Snowflake is well down this path. And Databricks with Delta Sharing is also on this curve. So AWS has to address data sharing further and demonstrate the capability as part of its end-to-end data integration strategy.
Is it possible AWS will pursue this vision with partners? That would be a powerful combination. But it’s more likely AWS will develop its own data sharing capability and continue down a separate partner path, as it has in the past.
AWS’ end-to-end data vision = Snowflake + Databricks + IaaS
Let’s bring in some ETR spending data to put some context around this issue with reference points in the form of AWS and its competitors and partners.
Above is a chart from ETR that shows Net Score, or spending momentum, on the Y axis and Overlap, or pervasiveness in the survey, on the X axis. The inserted table shows how the dots are positioned: Net Score by shared N in the survey. We’ve filtered the data on three big data segments: analytics, database and ML/AI. The red dotted line at 40% indicates highly elevated customer spend.
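For readers new to the methodology, Net Score is derived from the share of customers in each spending-intention category: those adopting a platform or increasing spend count positively, those decreasing spend or replacing the platform count negatively, and flat spenders are excluded. The sketch below follows that published definition; the sample distribution is made up for illustration.

```python
def net_score(adopting, increasing, flat, decreasing, replacing):
    """Net Score = (% adopting new + % increasing spend)
                 - (% decreasing spend + % replacing).
    Inputs are fractions of respondents; 'flat' is excluded."""
    # Sanity check: the five categories should cover all respondents.
    assert abs(adopting + increasing + flat + decreasing + replacing - 1.0) < 1e-9
    return (adopting + increasing) - (decreasing + replacing)

# Made-up distribution, for illustration only:
print(f"{net_score(0.20, 0.50, 0.23, 0.05, 0.02):.0%}")  # 63%
```

A reading above the 40% line thus means the customers adding or expanding spend outnumber those cutting or replacing by more than 40 points.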
As usual, Snowflake outperforms all players on the Y axis with a Net Score of 63%. Databricks is right behind Snowflake with a robust 53% Net Score. All three big U.S. cloud players are above that line, with Microsoft Corp. and AWS dominating the X axis. And you see a number of other emerging data players such as Grafana Labs Inc. and Datadog Inc. MongoDB is solidly in the mix and then some more established players such as Splunk Inc., Tableau Software and Cisco Systems Inc. with its network observability offerings.
Then you see the really established players in data such as Informatica Inc., IBM Corp. and Oracle, all with strong presence but red in the momentum category.
We’ve put a red highlight around Databricks, Snowflake and AWS. Why? Well, we feel there’s no way AWS is going to hit the brakes on innovating at the base service level – what we called primitives earlier. Selipsky told Furrier as much in their sit-down. Our premise is that AWS will serve the technical user and data science community, the traditional domain of Databricks, and at the same time address the end-to-end integration, data sharing and business line requirements that Snowflake is positioned to serve. Meanwhile, both Snowflake and Databricks are pursuing a similar vision.
Notably, people often ask us, “How will Snowflake and Databricks compete with the likes of AWS?” And we answer, “By focusing on data exclusively and bringing multicloud to the table.” But perhaps the more interesting question is: How will AWS compete with the likes of the specialists – Snowflake and Databricks — and continue to offer best-of-breed services?
The answer is depicted above. AWS will serve the technical developer and data science audience and, through end-to-end integrations and future services, simplify the data journey for business users.
Ecosystem is AWS’ not-so-secret weapon
The nuance is in all the other dots in the data, and the hundreds of thousands that are not shown here: that’s the AWS ecosystem. You see, AWS has earned the status of the No. 1 cloud platform that everyone wants to partner with. It has more than 100,000 partners in 150 countries. That ecosystem, combined with the capabilities we’re discussing, means AWS, while perhaps behind in areas such as data sharing and integrated governance, can wildly succeed by offering these capabilities and leveraging its partners.
For their part, the Snowflakes, Databricks and Mongos of the world have to stay focused on the mission, build the best products and develop their own ecosystems to compete for and attract the mindshare of both developers and business users. And that’s why it’s so interesting to hear Selipsky basically say these partners are (just) ISVs and this data capability is not a separate cloud, it’s a set of integrated services.
Meanwhile, Snowflake, Databricks and MongoDB (and others) in our view are building superclouds on top of AWS, Azure and Google.
When great products meet great sales and marketing, good things can happen, so this will be really fun to watch what AWS announces in this area at re:Invent.
The correlation between serverless and containers
One other topic Selipsky talked about was the relationship between serverless and container adoption. And we have some data on this.
In the very earliest days of AWS, Jeff used to say a lot, “If I were starting Amazon today, I’d have built it on top of AWS.” I think the same thing is true here with Lambda. If Amazon were starting today, it’s a given it would build on the cloud, and I think many of the applications that comprise Amazon’s consumer business would be built on our serverless capabilities. We still have plenty of capabilities, features and functionality we need to add to Lambda and our various services, so that may not be true from the get-go right now. But if you look at the hundreds of thousands of customers who are building on top of Lambda… people are building real, serious things on top of Lambda. And the pace of iteration you’ll see there will increase as well, and I really believe that to be true over the next year or two.
Jassy made that statement five years ago and gave a strong hint that serverless was going to be a key developer platform going forward. And Selipsky referenced the correlation between serverless and containers so we wanted to test that with the ETR data.
Below is a screen grab of that same XY view across 1,300 respondents in the October ETR survey. We’ve isolated on the cloud computing sector and pulled the serverless options for the Big Three cloud players. You can see Google Cloud Functions, AWS Lambda and Microsoft Azure Functions. On the Y axis is Net Score which is a measure of spending momentum. On the X axis is presence in the data set.
Now remember, anything above that 40% mark is considered highly elevated. All three serverless platforms are well above that level.
So we want to test the correlation between serverless momentum and container adoption. The way we do that is to filter the data on containers and see what happens. The N drops to 443 – i.e., the overlap between those aggressively adopting serverless and those adopting containers.
The first chart is serverless platforms alone and the second chart shows serverless customers that are also adopting containers aggressively. And you can see in the latter chart all three serverless platforms move to the right, signifying a high correlation between the two.
Serverless adoption generally
Correlation between serverless and container adoption
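Mechanically, that filtering step looks something like the sketch below: compute a platform’s Net Score over all respondents, then recompute it over only the subset that also cites containers. The respondent records here are fabricated for illustration and have nothing to do with the actual ETR panel.

```python
# Fabricated respondents: which technologies they cite, and their serverless
# spending intention (+1 = increasing, 0 = flat, -1 = decreasing).
respondents = [
    {"tech": {"lambda", "containers"}, "serverless_spend": +1},
    {"tech": {"lambda", "containers"}, "serverless_spend": +1},
    {"tech": {"lambda"},               "serverless_spend": 0},
    {"tech": {"lambda"},               "serverless_spend": -1},
    {"tech": {"lambda", "containers"}, "serverless_spend": +1},
    {"tech": {"lambda"},               "serverless_spend": +1},
]

def net_score(group):
    """Simplified Net Score: share increasing minus share decreasing."""
    up = sum(1 for r in group if r["serverless_spend"] > 0)
    down = sum(1 for r in group if r["serverless_spend"] < 0)
    return (up - down) / len(group)

everyone = respondents
container_adopters = [r for r in respondents if "containers" in r["tech"]]

print(f"all respondents:    N={len(everyone)}, Net Score={net_score(everyone):.0%}")
print(f"container adopters: N={len(container_adopters)}, Net Score={net_score(container_adopters):.0%}")
```

In this toy data, the container-adopter subset shows a smaller N but a higher Net Score, which is exactly the rightward (and correlated) shift described above.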
Why does this matter? Developer simplification and choice
To get a better understanding of what this means we reached out to friend and former CUBE co-host Stu Miniman. What he said was people generally used to think of virtual machines, containers and serverless as distinctly different architectures, but the lines are beginning to blur.
Serverless makes things simpler for developers who don’t want to worry about underlying infrastructure. As Selipsky and the data indicate, serverless and containers are coming together.
But as we discussed with Miniman, there’s a spectrum where on the left you have native cloud VMs, then in the middle AWS Fargate, and the rightmost anchor is Lambda.
Using containers without serverless
Traditionally in the cloud, if developers wanted to use containers, they would have to build a container image, select and deploy EC2 instances, allocate a certain amount of memory, fence off the apps in a virtual machine, run the EC2 instances against the app and pay for all the EC2 resources.
AWS Fargate – simpler but some control over the runtime
AWS Fargate lets you run containerized apps with less infrastructure management. So with Fargate, you’d build the container images, allocate your memory and compute resources, run the app and pay for the resources only when they’re used. Fargate lets you control the runtime environment while at the same time simplifying the infrastructure management. And you don’t have to worry about isolating the app and other stuff like choosing server types and patching. AWS does all that for you.
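As a sketch of what “less infrastructure management” looks like in practice, here is a minimal Fargate task definition expressed as the Python dict you would pass to boto3’s `ecs.register_task_definition`. The family name, image and sizes are illustrative; the point is what the definition does not contain.

```python
# Minimal Fargate task definition (illustrative names and sizes).
# In a real deployment you would pass this to boto3:
#   boto3.client("ecs").register_task_definition(**task_def)
task_def = {
    "family": "hello-web",                   # hypothetical task family
    "requiresCompatibilities": ["FARGATE"],  # no EC2 instances to manage
    "networkMode": "awsvpc",                 # required network mode for Fargate
    "cpu": "256",                            # 0.25 vCPU, allocated up front
    "memory": "512",                         # MiB, allocated up front
    "containerDefinitions": [
        {
            "name": "web",
            "image": "public.ecr.aws/nginx/nginx:latest",
            "essential": True,
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
}
# Note what is absent: no AMI, no instance type, no patching, no host
# isolation -- Fargate handles the underlying servers.
print(sorted(task_def))
```

You still declare the container image, CPU and memory, so you retain control over the runtime, but everything below that line is AWS’ problem.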
AWS Lambda: Don’t think about the underlying infrastructure
Then there’s Lambda. With Lambda you don’t have to worry about any of the underlying server infrastructure. You’re just running code as functions. So the developers spend their time worrying about the applications and the functions needed.
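A minimal, hypothetical Lambda function makes the point: the developer writes only the handler below. There is no server, container or memory plumbing anywhere in the code, and the function can be exercised locally by calling the handler with a fake event.

```python
import json

def handler(event, context):
    """The entry point Lambda invokes: pure application logic, no infrastructure."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Exercise the function locally with a fake event (context is unused here).
response = handler({"name": "re:Invent"}, None)
print(response["statusCode"], response["body"])
```

Everything else – provisioning, scaling, patching, even idle capacity – disappears from the developer’s view.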
The point is there’s a movement, as we saw in the ETR data, toward simplifying the development environment and allowing the cloud vendor, AWS in this case, to do more of the underlying management. Some folks will still want to turn knobs and dials, but increasingly we’re going to see more adoption of higher-level services.
Re:Invent: always a firehose of content
Let’s do a rapid rundown of what to expect this year at re:Invent 2022.
We talked about optimizing cloud spend and that will undoubtedly be a conversation in the “Hallway Track.”
AWS will continue to deliver on the underlying core infrastructure, but we expect messaging around simplifying the experience for organizations. Selipsky is leading AWS into its next phase of growth and that means moving beyond IT transformation into deeper business integration and organizational transformation. And so he’s leading a multivector strategy – serving the traditional peeps who want fine-grained access to core services – we’ll see continued innovation in compute, storage, AI and the like — and simplification through both integration and horizontal apps like Amazon Connect.
As we’ve reported many times, Databricks is moving from its stronghold realm of data science into business intelligence and analytics, whereas Snowflake is coming from its data analytics strength and moving into the world of data science. AWS is going down a path of Snowflake meets Databricks with an underlying cloud IaaS and PaaS layer that puts these three companies on a very interesting trajectory. You can expect AWS to go right after the data sharing opportunity, and in doing so, it will have to address data governance in a more complete fashion.
Price-performance is a topic that will never go away. One thing we haven’t mentioned today is silicon. It’s a topic we’ve covered extensively on Breaking Analysis from Nitro to Graviton, the AWS acquisition of Annapurna, and new specialized capabilities such as Inferentia and Trainium. We’d expect something more at re:Invent. Perhaps new Graviton instances. David Floyer said he’s expecting a complete SoC – system-on-chip – from AWS at some point, an Arm-based server to eventually include high-speed CXL connections to devices and memories. That would be all to address next-gen applications, lower power requirements and lower cost.
Swami Sivasubramanian will give his usual update on ML and AI, building on Amazon’s years of SageMaker innovation. Perhaps there will be a focus on conversational AI or better support for vision. And maybe better integration across Amazon’s portfolio and integration of large language models or neural networks and generative AI.
Security: Always high on the list at re:Invent, and of course there’s re:Inforce, AWS’ dedicated conference on security. Here we’d like to see more on supply chain security and perhaps how AWS can help, as well as tooling to make the CISO’s life easier. But the key so far is AWS is much more partner- and ecosystem-friendly in the security segment than Microsoft traditionally has been. As such, firms such as Okta Inc., CrowdStrike Holdings Inc. and Palo Alto Networks Inc. – and many others – have plenty of room to play in the AWS ecosystem.
We’d expect to hear something about ESG (environmental, social and governance), and hopefully not only how AWS is helping the environment, which is important, but also how it will help customers save money.
And finally – re:Invent is an ecosystem event. It is the Super Bowl of tech events and the ecosystem will be out in full force. Every tech company on the planet will have a presence and theCUBE will be featuring many of the partners from the show floor.
So you’ll definitely want to tune into thecube.net and check out our exclusive re:Invent coverage. We start Monday evening and go wall-to-wall through Thursday. We have three sets at the show and our entire team will be there, so please reach out or stop by and say hello.
Keep in touch
Many thanks to Stu Miniman and David Floyer for their input on today’s episode. And of course John Furrier for extracting the signal from the noise in his sitdown with Adam Selipsky. Kudos to Alex Myerson and Ken Shiffman on production, podcasts and media workflows for Breaking Analysis. Special thanks to Kristen Martin and Cheryl Knight who help us keep our community informed and get the word out, and to Rob Hof, our editor in chief at SiliconANGLE.
Also, check out this ETR Tutorial we created, which explains the spending methodology in more detail. Note: ETR is a separate company from Wikibon and SiliconANGLE. If you would like to cite or republish any of the company’s data, or inquire about its services, please contact ETR at firstname.lastname@example.org.
Here’s the full video analysis:
All statements made regarding companies or securities are strictly beliefs, points of view and opinions held by SiliconANGLE Media, Enterprise Technology Research, other guests on theCUBE and guest writers. Such statements are not recommendations by these individuals to buy, sell or hold any security. The content presented does not constitute investment advice and should not be used as the basis for any investment decision. You and only you are responsible for your investment decisions.
Disclosure: Many of the companies cited in Breaking Analysis are sponsors of theCUBE and/or clients of Wikibon. None of these firms or other companies have any editorial control over or advanced viewing of what’s published in Breaking Analysis.