While the attention of markets and technology executives is focused on shortages of GPUs and HBM memory, a new and costly bottleneck is forming in the supply chain. The AI boom is beginning to drain global storage resources at a drastic rate, and Nvidia’s upcoming architecture could cement the trend. SSD prices, which have almost doubled since October, are only the beginning of a wider phenomenon.
A key factor changing market dynamics is Nvidia’s upcoming Vera Rubin platform. According to Citi analysis, a single server based on this architecture will require as much as 1,152 TB of NAND SSD storage. Given that 30,000 such systems are planned to ship in 2026 and the order book is expected to swell by another 100,000 units a year later, the maths is inexorable: Nvidia’s hardware alone will consume nearly 150 million terabytes of NAND over the next two years.
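For readers who want to check the arithmetic, the sketch below simply multiplies out the Citi estimates quoted above; the variable names are illustrative, not part of any published model.

```python
# Back-of-the-envelope check of the figures quoted above.
TB_PER_SERVER = 1_152      # NAND SSD storage per Vera Rubin server, in terabytes (Citi estimate)
UNITS_2026 = 30_000        # systems planned to ship in 2026
UNITS_2027 = 100_000       # additional units expected a year later

total_tb = (UNITS_2026 + UNITS_2027) * TB_PER_SERVER
print(f"Total NAND over two years: {total_tb:,} TB (~{total_tb / 1e6:.0f} million TB)")
# -> Total NAND over two years: 149,760,000 TB (~150 million TB)
```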
At the macro level, these numbers are a legitimate concern for hardware manufacturers outside the AI sector. In 2026, Rubin systems are estimated to absorb close to 3 per cent of global NAND supply; by 2027, that share is expected to rise to an alarming 9.3 per cent. And this concerns only a single Nvidia product, ignoring the demand generated by competing solutions such as AMD’s Helios clusters built on Instinct MI400 chips.
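As a rough illustration only, the same per-server figure can be combined with the quoted market shares to back out what they imply about total NAND output; the resulting totals below are derived purely from the numbers in this article, not from any independent supply-side data.

```python
# What the quoted market shares imply about global NAND output (illustrative only).
TB_PER_SERVER = 1_152  # per-server NAND estimate quoted above, in terabytes

for year, units, share in [(2026, 30_000, 0.03), (2027, 100_000, 0.093)]:
    rubin_tb = units * TB_PER_SERVER                 # NAND consumed by Rubin systems that year
    implied_global_eb = rubin_tb / share / 1e6       # implied global supply, in exabytes
    print(f"{year}: Rubin systems ~{rubin_tb / 1e6:.1f}M TB, "
          f"implying global NAND output of roughly {implied_global_eb:,.0f} EB")
# -> 2026: Rubin systems ~34.6M TB, implying global NAND output of roughly 1,152 EB
# -> 2027: Rubin systems ~115.2M TB, implying global NAND output of roughly 1,239 EB
```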
The implications for business are clear: the era of low-cost storage is over. Current NAND fab capacity cannot absorb this surge in demand without drastic price adjustments. The domino effect that started in hyperscalers’ server rooms will inevitably spill over into the PC and workstation market. For purchasing departments, this means budgets will have to be revised. Unless the investment bubble around AI bursts spectacularly, there is little hope of component prices stabilising in the coming quarters, and IT infrastructure upgrades put off “for later” could turn out to be a costly mistake.
