AWS and OpenAI are once again – this time formally – intertwining their fortunes, to the tune of tens of billions of dollars and millions of CPUs. Several axes of tension run through AI today: who controls compute, who closes deals with the GPU manufacturers, who builds the hyperscale data centres, and who captures the next wave of adoption. This deal is a response both to the capital pressure around AI and to the shortage of compute in the market.
Both parties are talking about a seven-year horizon and a value of $38bn. Spread over that horizon, the commitment averages roughly $5.4bn a year – which puts it in context, given that analyst estimates put Amazon's CapEx on data centres alone above $60bn for 2024. This partnership is not an anomaly. It is the next step in an era where compute has become not only a currency but also a mechanism for controlling narrative and demand.
From OpenAI’s perspective, this is a calculated hedge: the company – which has no real infrastructure of its own – needs access to a large, stable, low-cost and predictable hardware layer. The GPU market remains tight, even though NVIDIA has been raising volumes steadily for the past year and a half. If OpenAI really wants to maintain its release cadence for new models, it can no longer rely on Microsoft’s infrastructure alone.
From AWS’s perspective, it is evidence of a recovering position in AI after a weaker 2023, when Google and Azure were quicker to close high-profile partnerships around generative AI. Amazon has hyperscaler DNA in HPC, and it is now demonstrating that it can make hundreds of thousands of GPUs available to OpenAI today and scale to tens of millions of CPUs later. In practice, this means OpenAI will be able to train its next generation of models on the EC2 UltraClusters that Amazon is building as its own AI-native fabric.
One element that is not visible in the announcement but determines the meaning of this collaboration: Amazon Bedrock. OpenAI’s models are now available there as API services, which brings the margin straight into AWS’s P&L. OpenAI gains scale and lower compute cost; Amazon gains models that previously competed with its own.
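To make the Bedrock point concrete, here is a minimal sketch of what consuming such a model looks like for an AWS customer: one boto3 call against Bedrock’s Converse API, billed through the customer’s AWS account rather than through OpenAI. The model identifier below is a placeholder, since the exact IDs depend on what AWS lists in the Bedrock model catalogue for a given region.

```python
# Minimal sketch: calling an OpenAI-family model via Amazon Bedrock's
# Converse API. MODEL_ID is a placeholder, not a confirmed catalogue entry.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

MODEL_ID = "openai.gpt-oss-120b-1:0"  # hypothetical model identifier

response = bedrock.converse(
    modelId=MODEL_ID,
    messages=[
        {"role": "user",
         "content": [{"text": "Summarise the AWS-OpenAI deal in one sentence."}]}
    ],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)

# The Converse API returns the assistant message under output.message.content
print(response["output"]["message"]["content"][0]["text"])
```

The design point this illustrates: the invocation path, the identity layer and the metering all belong to AWS, which is exactly why third-party models on Bedrock land in AWS’s P&L rather than bypassing it.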
If this collaborative model holds, we will see more such multi-year contracts in 2025. Hyperscalers have begun to understand that AI foundation models are not yet a SaaS play but an HPC play in its purest form. The real moat therefore still sits at the level of silicon, cooling and power.
Industry context is key here. According to Synergy Research’s Q2 2024 data, hyperscalers committed a record $78bn of CapEx to data-centre and network infrastructure, mainly because of AI. That CapEx is to be monetised not by selling general-purpose cloud but by selling AI as compute-as-a-service.
Meanwhile, the enterprise market – under growing data loads, new regulation and pressure to automate – shows no sign of slowing down. OpenAI and AWS both feel that pull. The deal may look like an infrastructure transaction. In fact, it is an offensive move to capture a stream of demand that, between 2025 and 2027, will drive the largest transfer of computing power in the history of the IT sector.

