DeepSeek V4 relies on Huawei chips

The upcoming launch of DeepSeek V4 marks a pivotal moment for China’s technology sector, shifting the focus from imported solutions to the integration of proprietary software with domestic hardware. The lab’s close collaboration with Huawei on optimizing the new model suggests that Beijing has found a way to circumvent Western infrastructure restrictions, directly challenging Nvidia’s dominance.


When DeepSeek released the V3 and R1 models late last year, financial markets reacted nervously. Investors began asking uncomfortable questions about Nvidia’s multi-billion-dollar infrastructure spend once the Chinese lab had shown that high performance could be achieved at a fraction of the cost. The upcoming release of DeepSeek V4 is not just another technical update; it is a manifesto of Beijing’s technological independence.

The signal is unmistakable. Instead of following the AI industry’s standard practice of optimising for US chips, DeepSeek has bypassed US manufacturers entirely. The lab’s engineers have spent the last few months working hand-in-hand with Huawei Technologies and Cambricon Technologies, rewriting key sections of the model’s base code to squeeze the maximum capability out of domestic silicon.

The scale of this shift is reflected in the order books of the Chinese giants. Alibaba, ByteDance and Tencent have decided to make bulk purchases of Huawei’s latest chips, amounting to hundreds of thousands of units, a strategic move to secure operational continuity in the face of geopolitical uncertainty and tightening sanctions. The fact that V4 is being developed in three variants, each optimised for a different Chinese processor, suggests that the era of default priority for Santa Clara hardware in China is coming to an end.

From a business perspective, DeepSeek V4 is a stress test for the thesis that the most expensive infrastructure is required to build powerful AI systems. If a next-generation model trained on local hardware, with theoretically lower performance than top-of-the-range H100 or Blackwell units, keeps pace with the US leaders, the roadmap for AI development will be permanently altered.
