OpenAI, the creator of ChatGPT, is taking another step towards technological independence – this time by announcing a partnership with Broadcom to design its own AI processors. The decision could reshape the balance of power in the semiconductor sector, but it also reveals the growing pressure to secure computing power in the era of generative models.
According to the two companies, Broadcom is expected to develop and deploy custom OpenAI chips from the second half of 2026. The scale of the project is striking: 10 GW of infrastructure, equivalent to the electricity demand of more than 8 million US households. In an industry where every available Nvidia GPU counts, this is an attempt to move from being a customer to being a builder of core infrastructure.
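As a rough sanity check on that household comparison – a back-of-envelope sketch only, assuming an average US household draws about 1.2 kW of continuous power (a figure not given in the announcement):

```python
# Back-of-envelope check: how many average US households does 10 GW correspond to?
# Assumption (not from the announcement): an average household draws ~1.2 kW continuously.
planned_capacity_gw = 10
avg_household_kw = 1.2  # assumed average continuous draw per household

households = planned_capacity_gw * 1_000_000 / avg_household_kw  # 1 GW = 1,000,000 kW
print(f"~{households / 1_000_000:.1f} million households")  # ~8.3 million
```

Under that assumption, the arithmetic lands at roughly 8.3 million households, consistent with the "more than 8 million" figure cited by the companies.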
Moving beyond Nvidia, but not without risk
The market reacted enthusiastically to the announcement, with Broadcom shares rising by more than 10%. Analysts, however, remain cautious: designing your own chips is not only an engineering challenge but also a financial gamble. According to estimates attributed to Nvidia’s CEO, building a single gigawatt of data-centre capacity could cost up to $60 billion, and OpenAI is planning a project ten times that size.
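Taken at face value, those two figures imply a striking total. A rough sketch, using only the numbers quoted above (the per-gigawatt estimate is the Nvidia CEO’s, not OpenAI’s own costing):

```python
# Rough scaling of the quoted estimate: up to $60bn per gigawatt of data centre,
# applied to the 10 GW project OpenAI has announced with Broadcom.
cost_per_gw_billion_usd = 60   # upper-end estimate attributed to Nvidia's CEO
planned_gw = 10                # scale of the OpenAI/Broadcom plan

total_billion_usd = cost_per_gw_billion_usd * planned_gw
print(f"Implied upper bound: ~${total_billion_usd} billion")  # ~$600 billion
```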
There is a bigger game going on in the background: the race to control the cost and efficiency of computing. Alphabet, Amazon and Microsoft have been investing in proprietary chips (TPU, Trainium, Maia) for years, but so far none of them has matched Nvidia’s performance on generative models. OpenAI, however, is going a step further: it is not just buying hardware, it is designing its own.
How will OpenAI fund the race for silicon?
Financial details of the collaboration were not disclosed, but the scale of the investment clearly requires a new funding architecture. Analysts expect pre-orders, strategic investments and support from Microsoft – OpenAI’s largest backer – to come into play. Herein lies a paradox: to become independent of Nvidia, OpenAI must lean even more heavily on Big Tech’s capital.
A week earlier, OpenAI signed a deal with AMD for 6 GW of GPU capacity, with warrants that could give it a stake in the chipmaker. In parallel, Nvidia announced it would invest up to $100bn in OpenAI, supplying it with complete data-centre systems. The battle has moved from R&D labs to investment budgets.
Broadcom – the quiet winner of the AI boom
For Broadcom, this is another sign that the boom in generative AI is turning it from a networking-chip supplier into a key player in AI silicon. The company had already disclosed a $10bn order for AI chips from an unnamed customer (reportedly not OpenAI). Since the end of 2022, its shares have risen almost sixfold – a pace that rivals the GPU makers themselves.
What’s more, the new OpenAI systems are expected to be scaled out over Broadcom’s Ethernet networking, a direct challenge to the dominance of Nvidia’s InfiniBand. If the experiment succeeds, it is not just Nvidia’s GPUs that will be under threat, but its entire ecosystem.
The race for silicon is just beginning
The 2026 deadline looks aggressive. But the pressure is clear: to sustain the pace of model development, OpenAI needs to control its own computing foundations. It is becoming not just an AI company but, potentially, a semiconductor company as well. If the move succeeds, it could change not only the AI market, but also the way future supercomputers are built.