Recent days in the IT ecosystem have belonged to projects such as Clawbot, which after a brand dispute now operates as OpenClaw. Reaching 80,000 stars on GitHub in record time, together with the viral enthusiasm around autonomous assistants that perform tasks directly on the user's computer, is a clear signal that the era of simple chatbots is coming to an end. We are entering the phase of agentic AI: systems that not only suggest solutions but implement them themselves, a fundamental paradigm shift in human-machine interaction.
While the mainstream media gets excited about image generation and text writing, a far more pragmatic revolution is under way in the business world. The real money, and the highest return on investment, is today being generated by artificial intelligence in the ‘engine rooms’ of modern businesses: IT operations and DevOps departments. This is where, away from the limelight, autonomy delivers the most tangible financial and operational benefits.
Data as evidence: Where does the heart of adoption beat?
According to recent industry reports surveying technology leaders, IT operations leads the adoption of AI agents, ahead of software engineering and traditional customer service. This ranking is no coincidence but the result of cold business calculation: the highest expected ROI for agent projects, at 44%, is attributed to systems monitoring, putting it far ahead of cyber security or data processing.
This pragmatism rests on the fact that modern IT environments generate huge volumes of structured, continuous and precise data in the form of logs and metrics. Such data is perfect fuel for autonomous models, which can process it faster and more accurately than any human team. Gartner predicts that by 2029 as many as 70% of companies will have deployed agent-based AI in their IT infrastructure operations, a giant leap from the mere 5% recorded as recently as 2025.
Evolution from observation to autonomy
The evolution of artificial intelligence in IT environments is proceeding along several tracks, gradually changing the role of engineers from ad hoc firefighters into architects of self-regulating systems. The first stage of this transformation is intelligent observability: an agent not only reports an error but understands its wider context and can sift the relevant incidents out of the information noise, handing technical teams a ready-made diagnosis instead of thousands of raw notifications.
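To make the noise-sifting step concrete, here is a minimal sketch in Python: raw alerts are grouped by a (service, symptom) fingerprint, small groups are discarded as noise, and the rest are ranked into incidents. The alert fields, threshold and ranking rule are illustrative assumptions, not a reference to any particular monitoring product.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Alert:
    service: str   # e.g. "checkout-api" (illustrative)
    symptom: str   # e.g. "high_latency"
    severity: int  # 1 (info) .. 5 (critical)

def triage(alerts: list[Alert], min_count: int = 3) -> list[tuple[str, int, int]]:
    """Collapse a stream of raw alerts into a ranked list of incidents."""
    # Group alerts sharing the same (service, symptom) fingerprint.
    groups: dict[tuple[str, str], list[Alert]] = defaultdict(list)
    for alert in alerts:
        groups[(alert.service, alert.symptom)].append(alert)

    # Drop groups below the threshold: one-off alerts are treated as noise.
    incidents = [
        (f"{service}: {symptom}", len(batch), max(a.severity for a in batch))
        for (service, symptom), batch in groups.items()
        if len(batch) >= min_count
    ]
    # Rank by peak severity first, then by alert volume.
    return sorted(incidents, key=lambda i: (i[2], i[1]), reverse=True)
```

A production agent would of course correlate across services and time windows as well, but even this toy version shows the shape of the transformation: thousands of notifications in, a short ranked diagnosis out.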
The second stage is the real turning point and is autonomous repair. In this scenario, an AI agent, detecting, for example, a memory leak or critical overload, can autonomously take a repair action, such as restarting a container or scaling resources in the cloud, only informing the human of the successful process. Ultimately, we are moving towards holistic orchestration, where agents collaborate with RPA robots and humans in a single ecosystem to automatically update documentation and plan long-term architecture fixes without involving human resources in repetitive activities.
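Below is a minimal sketch of that detect-act-notify loop, assuming a Kubernetes deployment. The memory threshold, the get_memory_usage helper (standing in for a metrics backend such as Prometheus) and the notification channel are all hypothetical placeholders for this example.

```python
import subprocess

MEMORY_LIMIT_MB = 900  # illustrative threshold for a suspected leak/overload

def get_memory_usage(deployment: str) -> float:
    # Hypothetical helper: a real agent would query Prometheus or another
    # metrics backend here. Stubbed with a fixed value for illustration.
    return 1024.0

def notify(message: str) -> None:
    # Hypothetical stand-in for paging/Slack; the key design point is that
    # the human is informed *after* the action, not asked before it.
    print(message)

def remediate(deployment: str, namespace: str = "default") -> None:
    usage_mb = get_memory_usage(deployment)
    if usage_mb < MEMORY_LIMIT_MB:
        return  # healthy, nothing to do

    # `kubectl rollout restart` performs a zero-downtime rolling restart,
    # replacing the leaking pods one by one.
    subprocess.run(
        ["kubectl", "rollout", "restart",
         f"deployment/{deployment}", "-n", namespace],
        check=True,
    )
    notify(f"{deployment}: memory at {usage_mb:.0f} MB, rolling restart issued")
```

In practice such an action would sit behind guardrails (allow-lists of safe actions, rate limits, audit logs), which is exactly the governance question raised in the next section.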
Challenges and barriers to growth
Despite the obvious potential, the path from pilot to full-scale production is riddled with challenges, which McKinsey calls the ‘GenAI paradox’: technology that is used widely yet has no significant impact on an organisation’s bottom line. The most common cause of failure is poor data quality, because a model in production must face the chaos of real-world, unstructured information rather than the curated data of a pilot.
A further problem is the skills gap and the lack of a clearly defined business value before implementation begins. Many companies succumb to trend pressure without setting hard success metrics, which leads to project cancellations once costs rise or risk proves impossible to control. Managing a fleet of autonomous agents requires completely different skills and governance standards than traditional infrastructure administration, forcing organisations into a deep internal transformation.
Standards as the foundation of the future
One of the most important breakthroughs helping to dismantle these technical barriers is the emergence of the Model Context Protocol (MCP). The standard is becoming a universal communication port for artificial intelligence, letting agents connect easily and securely to almost any data source without dedicated point-to-point integrations. Experts at BCG compare the protocol to USB-C: it drastically reduces integration effort and keeps organisations from becoming locked in to a single solution provider.
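As an illustration of how little glue code such a standard requires, here is a minimal MCP server built with the FastMCP helper from the official MCP Python SDK. The tool name, log path and data source are assumptions made for the sake of the example; any MCP-capable agent could then discover and call this tool without a bespoke integration.

```python
# Minimal MCP server exposing one ITOps tool, using the FastMCP helper
# from the official `mcp` Python SDK. Tool name and log location are
# illustrative assumptions.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("itops-logs")

@mcp.tool()
def tail_service_logs(service: str, lines: int = 50) -> str:
    """Return the last `lines` log lines for a service.

    A real server would read from Loki, CloudWatch, journald, etc.;
    here we simply read a local file named after the service.
    """
    with open(f"/var/log/{service}.log") as f:
        return "".join(f.readlines()[-lines:])

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```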
The adoption of such standards by technology giants and open-source foundations signals that the technology is maturing. It allows companies to build flexible architectures in which different AI models cooperate in a standardised way, and it is this standardisation that enables the move from isolated experiments to scalable production systems capable of delivering real savings across the enterprise.
Realism instead of promises
Agent-based artificial intelligence in IT operations has ceased to be a futuristic vision and has become a tangible business reality. The companies that are most successful in this area are those that link technology projects to clear business objectives from the outset and invest in data quality and a robust governance framework. Success in this new era does not depend on having the most advanced model, but on an organisation’s ability to redesign its processes to realise the full potential of autonomy.
The question about the future of AI in business is no longer about what the technology can do, but about whether companies are ready to put intelligent agents at the helm of key operational areas. In a world of ever more complex digital systems, agent-based automation in ITOps seems not merely a strategic choice but a prerequisite for business continuity and efficiency in the years to come.
