OpenAI – the company that has put ChatGPT at the centre of the global AI revolution – is expanding its computing capacity by reaching for chips from… a competitor. As reported by Reuters, the company has begun leasing TPUs (Tensor Processing Units) from Google Cloud to meet the growing demand for computing power. It’s a move that could change the balance of power across the AI cloud landscape.
Until now, OpenAI has relied almost entirely on infrastructure from Microsoft – a major investor and technology partner – and GPU chips from Nvidia, the undisputed market leader in AI hardware. The decision to use TPUs from Google is therefore a significant step not only technologically, but also strategically.
TPUs are Google’s proprietary chips, designed specifically for machine learning workloads. Although until recently they were reserved almost exclusively for internal use, Google is increasingly making them available to external partners. Customers already include Apple, Anthropic and Safe Superintelligence. Now OpenAI is joining them.
Why is OpenAI taking this step now? Two reasons may be key: cost and diversification. As the scale of ChatGPT usage grows, so does the cost of inference – the process by which a trained AI model analyses new data and generates responses. TPUs may offer a more cost-effective alternative to expensive GPUs from Nvidia.
At the same time, the move reduces OpenAI’s dependence on Microsoft, which not only provides the company with infrastructure but also develops its own AI products that compete with ChatGPT (Copilot, Azure AI Studio). The partnership with Google could give OpenAI greater operational independence, even if Google – itself a competitor in generative AI – does not make all the latest generations of its TPUs available.
From Google’s perspective, winning such a high-profile customer sends a strong signal to the market. The company has been developing its cloud ecosystem for years, but has until now been overshadowed by AWS and Microsoft Azure. Allowing OpenAI to use TPUs is not only an example of ‘monetising’ its own AI technology, but also a way of strengthening its position in the battle for enterprise customers.
This is not yet a break with Nvidia or Microsoft – rather a considered attempt to build a more balanced computing ecosystem. But in the context of growing tensions and rivalries in the AI sector, it’s a move that could herald a larger reshuffle in cloud geopolitics.
For now, this is an experiment. But if Google’s TPUs prove cost-effective, OpenAI could set a trend that challenges Nvidia’s dominance and strengthens Google Cloud’s position as a computing platform for large-scale AI models.