Nvidia's decision to fundamentally change the memory architecture in its AI servers could cause an unprecedented price shock across the semiconductor supply chain. According to the latest analysis by Counterpoint Research, server memory prices are on course to double by the end of 2026. The source of the turmoil this time is not a shortage of raw materials but a strategic pivot by the AI market leader, which, in its search for energy efficiency, is reaching for technology until now found in consumer devices.
The Santa Clara-based chip giant has begun replacing industry-standard enterprise DDR5 modules with LPDDR (Low-Power Double Data Rate) chips, a low-power technology that has so far been the domain of smartphones and tablets. The move, prompted by the desire to cut the enormous power costs of AI servers, creates a problem of scale: a single AI server requires many times more memory than a mobile device, which suddenly makes Nvidia a customer with purchase volumes comparable to those of the largest smartphone manufacturers. Counterpoint describes this as a ‘seismic shift’ for which the supply chain is not prepared.
The situation backs the major memory manufacturers into a corner: Samsung Electronics, SK Hynix and Micron. These companies are already running at full capacity, with most of their production diverted to the high-bandwidth memory (HBM) needed to feed graphics accelerators. A sudden surge of LPDDR demand from the server sector threatens to cannibalise production lines and destabilise the market. Manufacturers that have recently cut the supply of older memory types will not be able to absorb new orders on this scale without drastic price adjustments.
The forecasts are unforgiving for end users. Analysts expect overall memory chip prices to rise by 50 per cent from current levels as early as the second quarter of 2026. Higher component costs will hit cloud providers (hyperscalers) and AI developers directly, adding pressure to data centre CAPEX budgets already stretched to historic levels by record GPU spending and energy infrastructure upgrades.
