Data centre market with its back against the wall. Lack of power is holding back digital transformation

The panic surrounding the shortage of integrated circuits has given way to an infrastructure crisis, in which the success of an AI project is determined not by code, but by the physical availability of megawatts. The “Time-to-Power” indicator has become the new benchmark for competitive advantage, pushing traditional hardware issues to the margins of operational risks.


As recently as two years ago, at the height of the AI fever of 2024, only one question was being asked in boardrooms: ‘Where do we get Nvidia chips?’ Chip availability was the bottleneck that dictated the pace of technological development. Today, in January 2026, the situation has changed dramatically. Hardware supply chains have cleared, and distributors’ warehouses are full of the latest Blackwell and Rubin chips. Yet new data centre investment is stalling.

The question of 2026 is no longer “Do you have the equipment?”, but “Where will you connect it?”. Power Availability has replaced silicon availability as the main operational risk factor. We are entering an era where the success of an AI project is determined by the old analogue power infrastructure rather than digital code.

A new bottleneck. The geopolitics of the socket

The average waiting time for a new power connection of more than 10 MW in Europe’s key hubs has lengthened from 18 months in 2023 to a shocking 4-5 years today. This means that a decision to build a server room taken today will only materialise operationally around 2030-2031. For the technology industry, this is an eternity.

The problem hits the so-called FLAP-D market (Frankfurt, London, Amsterdam, Paris, Dublin) hardest. These traditional data capitals are energy saturated. Grid operators in the Netherlands or Ireland are refusing to issue new connection conditions, citing the risk of destabilising the national energy systems.

In this landscape, Warsaw – emerging in recent years as a key hub for Central and Eastern Europe – has become a victim of its own success. Investments by giants such as Google, Microsoft or local cloud operators have rapidly consumed the available power reserves in the Warsaw agglomeration. Polskie Sieci Elektroenergetyczne (PSE) is facing a physical challenge: the networks in the capital area are not able to accommodate further gigawatt loads without a thorough modernisation that will take years. The result? Investors are forced to look for alternative locations – in the north of Poland, closer to offshore wind capacity, or in southern Europe, where solar power is more plentiful.

AI physics: Why do old server rooms ‘melt cables’?

The energy crisis also has a second, technical dimension. Even if a company has space in a server room built in 2020, it often cannot install modern AI infrastructure there. The reason is a drastic change in so-called power density (rack density).

In traditional IT, the standard was 5-8 kW of power consumption per server rack. Power and cooling systems were designed for these values. Today’s AI clusters, based on the Nvidia Blackwell architecture or successors, require between 50 and even 100 kW per rack.

Trying to put such infrastructure into an ‘old’ Data Centre (built just five years ago) ends in failure. The building cannot deliver that much current to one spot and, more importantly, it cannot dissipate the heat generated. Cooling a 100 kW cabinet with traditional air (precision air conditioning) is akin to cooling a racing engine with an office fan: physically impossible and uneconomic.
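The physics behind that analogy can be checked with back-of-the-envelope arithmetic. The sketch below estimates the airflow a rack needs to carry away its heat; the air properties and the 12 K allowed temperature rise are illustrative assumptions, not figures from the article.

```python
# Why air cooling fails at AI rack densities: required airflow scales
# linearly with heat load. Assumed values (illustrative): air density
# 1.2 kg/m^3, specific heat 1005 J/(kg*K), 12 K rise across the rack.

AIR_DENSITY = 1.2   # kg/m^3
AIR_CP = 1005.0     # J/(kg*K)

def airflow_m3s(heat_w: float, delta_t_k: float = 12.0) -> float:
    """Airflow (m^3/s) needed to remove `heat_w` watts as sensible heat."""
    return heat_w / (AIR_DENSITY * AIR_CP * delta_t_k)

legacy = airflow_m3s(8_000)     # classic 8 kW rack
ai_rack = airflow_m3s(100_000)  # 100 kW Blackwell-class rack

print(f"8 kW rack:   {legacy:.2f} m^3/s ({legacy * 3600:,.0f} m^3/h)")
print(f"100 kW rack: {ai_rack:.2f} m^3/s ({ai_rack * 3600:,.0f} m^3/h)")
```

At roughly 25,000 m³/h for a single 100 kW cabinet, the fan power, noise and duct cross-sections stop making sense, which is why the industry moves the coolant to a liquid instead.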

The cooling revolution: The end of the air era?

Consequently, 2026 is the moment of the ultimate triumph of Liquid Cooling technology. What was until recently the domain of overclocking enthusiasts and cryptocurrency miners has become the corporate standard.

Every new Hyperscale development commissioned this year is being designed to a hybrid or all-liquid standard. Two technologies dominate:

  • Direct-to-Chip liquid cooling (DLC): the coolant is piped directly to water blocks mounted on the CPUs and GPUs. This solution has become a warranty requirement for the latest servers.
  • Immersion Cooling: entire servers are submerged in tanks filled with a special dielectric (non-conductive) fluid.

This change is driven not only by physics, but also by EU regulations (EED – Energy Efficiency Directive). Liquid cooling is much more energy efficient and, moreover, allows heat recovery. The fluid leaving the server has a temperature of 60-70°C, which allows the Data Centre to be plugged directly into the municipal district heating network. In 2026, server rooms become de facto digital combined heat and power (CHP) plants, heating office buildings and housing estates, which is key to obtaining environmental permits.
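The district-heating claim can be made concrete with a rough sizing sketch. All the inputs below are illustrative assumptions rather than data from the article: a 10 MW IT load, roughly 80% of the heat captured in the liquid loop at 60-70°C, and about 10 kW of peak heating demand per household.

```python
# Rough heat-recovery sketch for a liquid-cooled site feeding a
# municipal district heating network. Figures are assumptions, not
# data from the article.

def homes_heated(it_load_kw: float,
                 capture_fraction: float = 0.8,
                 per_home_kw: float = 10.0) -> float:
    """Households a site could heat from recovered server heat."""
    recovered_kw = it_load_kw * capture_fraction
    return recovered_kw / per_home_kw

# A 10 MW campus could, under these assumptions, heat on the order
# of several hundred homes.
print(homes_heated(10_000))
```

Even at this crude level of estimation, the order of magnitude explains why heat recovery has become a lever in environmental permitting.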

The economics of scarcity: Power Banking and the atom

The shortage of capacity has triggered a sharp rise in prices. Rates for colocation (renting space for servers) in Warsaw and Frankfurt have risen by 30-40% year-on-year. Customers are no longer negotiating prices; they are bidding for who will be the first to sign a contract for ‘powered racks’.

The strategy of developers has also changed. In the real estate market, the phenomenon of ‘Power Banking’ is making waves. Investment funds are buying up old, bankrupt factories, steelworks or industrial plants. They are not interested in the buildings (often destined for demolition), but in the active, high-power grid connections allocated to the plot. In effect, a ‘power right’ is bought so that containers of AI servers can be put up on the site of a former foundry.

At the top of the investment pyramid, we see a shift towards nuclear power. Following in the footsteps of Microsoft and Amazon (high-profile 2024/2025 deals), European players are also looking to power their campuses from small modular reactors (SMRs) or to contract supply from existing nuclear power plants via long-term power purchase agreements (PPAs) and direct lines. The IT industry has realised that renewables such as wind and solar are too intermittent for AI, which has to ‘learn’ 24/7 at a constant load.
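The intermittency argument comes down to capacity factors. The sketch below shows how much nameplate capacity each source would need to deliver a constant 100 MW on average, ignoring storage entirely; the capacity factors are rough Central European assumptions, not figures from the article.

```python
# Nameplate capacity needed to average a constant 100 MW AI training
# load, by generation source. Capacity factors are rough assumptions
# for Central Europe, not data from the article; storage is ignored.

CAPACITY_FACTORS = {
    "solar": 0.12,
    "onshore wind": 0.30,
    "nuclear/SMR": 0.90,
}

def nameplate_mw(constant_load_mw: float, capacity_factor: float) -> float:
    """Installed capacity needed to match the load on average."""
    return constant_load_mw / capacity_factor

for source, cf in CAPACITY_FACTORS.items():
    print(f"{source:>12}: {nameplate_mw(100, cf):,.0f} MW nameplate")
```

The averages also hide the harder problem: even a heavily overbuilt solar fleet delivers nothing at night, so a 24/7 training cluster still needs firm generation or massive storage behind it.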

A new indicator of success – Time-to-Power

For Chief Information Officers (CIOs) planning strategies for 2026 and 2027, there is one key lesson: Hardware is easy, electricity is hard.

The traditional model, in which servers are ordered first and then space is sought for them, is dead. Today, the process needs to be reversed. Booking Data Centre capacity 12-24 months in advance is a must. The Time-to-Market (time to deploy a product) indicator has been replaced by Time-to-Power (time to get power).

The digital revolution today depends 100 per cent on analogue infrastructure. Without massive investment in transmission networks and new generation sources, artificial intelligence in Europe will hit a glass ceiling – not for lack of data or algorithms, but for the mundane lack of a socket to plug it into.
