There is an almost religious belief in the world of corporate technology that every problem can be solved with the right dose of automation, especially the ‘digital debt’ locked up in mainframe systems. Gartner’s latest forecasts, however, cast a chilly shadow over Silicon Valley’s enthusiasm: by 2026, more than 70% of mainframe exit projects will fail. The reason is not a lack of funding, but overconfidence in the ‘magical’ power of generative artificial intelligence.
For years, migration from legacy systems has been an arduous, costly process carrying enormous operational risk. The advent of large language models (LLMs) promised a breakthrough: the automatic conversion of millions of lines of COBOL into modern Java or Python. The reality is proving more complex. While AI excels at identifying errors and documenting code, actually converting workloads while preserving mainframe-level performance remains beyond its reach.
The trap of high expectations
The main problem analysts point to is the gap between vendors’ marketing promises and technological reality. Under pressure from investors, technology companies are pushing AI solutions as a panacea for decades of neglect in IT architecture. This approach ignores the fact that mainframes are not just ‘old computers’. They are unique ecosystems optimised for exceptional throughput and reliability, which cannot be replicated in the cloud simply by having an algorithm rewrite the code.
The risks are quantifiable. Gartner predicts that by 2030, three-quarters of the vendors currently involved in AI-based migrations will simply go out of business. For chief technology officers (CTOs), this means the prospect of being left with an unfinished, critical project and no support from a bankrupt partner.
Technical debt in a new edition
Instead of eliminating technical debt, unreflective AI-driven migration can paradoxically deepen it. Generative AI often produces code that ‘works’ but is not optimised for the new infrastructure. As a result, companies risk operational disruption, which in sectors such as banking or logistics can cost millions of dollars per minute.
What’s more, AI cannot transfer the business context accumulated over decades. Mainframe code often embodies business rules that today’s IT teams have long forgotten exist. A machine that does not understand the logic behind a particular solution can produce a modern system that fails to meet specific regulatory or operational requirements.
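A small illustration of how semantics can be lost in translation: COBOL’s fixed-point decimal arithmetic truncates extra digits by default (a COMPUTE statement without the ROUNDED phrase), whereas a naive rewrite to binary floating point with rounding silently changes the institution’s arithmetic rules. The sketch below is hypothetical — the function names, field behaviour, and interest rate are invented for illustration — but the pitfall it shows is real:

```python
from decimal import Decimal, ROUND_DOWN

def cobol_style_interest(amount: str, rate: str) -> Decimal:
    # COBOL PIC 9(7)V99 fields hold fixed-point decimals; COMPUTE without
    # the ROUNDED phrase truncates surplus digits rather than rounding them.
    product = Decimal(amount) * Decimal(rate)
    return product.quantize(Decimal("0.01"), rounding=ROUND_DOWN)

def naive_float_interest(amount: float, rate: float) -> float:
    # A line-by-line rewrite that maps the same fields to binary floats
    # and rounds — a subtle, silent change to the business rule.
    return round(amount * rate, 2)

print(cobol_style_interest("999.99", "0.0525"))  # 52.49 (truncated)
print(naive_float_interest(999.99, 0.0525))      # 52.5  (rounded)
```

A one-cent difference per transaction looks trivial until it is multiplied across millions of daily postings — and until an auditor asks why the new system no longer reconciles with the old one.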
Back to realism: Strategy instead of magic
Rather than abandoning mainframes completely and abruptly, business leaders should consider a hybrid approach. Gartner suggests that the key to success is not a ‘great escape’ but the selective modernisation of workloads and a focus on strengthening existing foundations. The news will certainly please giants like IBM, which have been promoting in-place modernisation of these systems for years.
An effective strategy requires rejecting the ‘miracle tools’ narrative. Rather than asking how AI can rewrite their code, leaders should ask which processes genuinely need to move to the cloud and which work best in their current, stable environment.
In the coming years, the winners will not be the companies quickest to embed AI in their migration processes, but those that show the most restraint. In the clash between the promises of artificial intelligence and the brutal logic of mainframes, the latter still dictates the terms of the game. Prudent risk assessment, and the acceptance that some systems are ‘too important to fail’, is now becoming the most innovative approach in business.

