The business world is gripped by a wave of enthusiasm. Boomi's new survey of 300 business and technology executives points to one dominant trend: nearly three quarters of respondents (73%) see AI agents as the biggest revolution and opportunity their companies have faced in the last five years. There is a widespread belief in their potential to transform almost every aspect of the business, from process optimisation to strategic decision-making.
However, beneath the surface of this enthusiasm lies a risk that can be defined by a single number: 98 per cent.
According to the same study, that is the share of AI agents currently deployed that operate without a fully implemented, consistent and continuous governance system.
Companies have fallen into the trap of ‘blind innovation’. Enthusiasm for the potential of artificial intelligence is leading them to entrust it with ever more responsibility, often while overlooking fundamental principles of control. This is a strategic dissonance that could cost organisations far more than any previous technological revolution.
Enthusiasm versus reality: the two faces of the AI revolution
The source of this optimism is a fundamental change in the nature of AI. The discussion is no longer solely about passive tools or chatbots. We are now talking about AI agents – autonomous entities to which specific tasks are delegated.
The survey shows that managers are increasingly willing to entrust them with areas until recently reserved exclusively for human experts: security risk management and even partial approval of investments and budgets. The problem is that this enthusiasm has radically overtaken the level of organisational preparedness.
In practice, the ‘98 per cent’ figure means that companies are deploying at scale a technology whose operation they do not fully understand and over which they do not have full control. This phenomenon can be described as ‘creeping autonomy’. It is not uncommon for boards and IT departments to lose track of exactly what data the algorithms process, which processes they control and on what basis they make decisions. There is a belief that the AI works for the benefit of the company, but the evidence and verification mechanisms are often missing.
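One concrete antidote to creeping autonomy is simply making every agent action verifiable after the fact. As a minimal illustration – the agent, its task and the log format here are hypothetical, not drawn from the survey – a small Python decorator can record what an agent did, on what inputs and with what result:

```python
import functools
import json
from datetime import datetime, timezone

AUDIT_LOG = "agent_audit.jsonl"  # hypothetical append-only audit trail

def audited(action_name: str):
    """Log every call an agent makes: what ran, on what data, with what outcome."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            entry = {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "action": action_name,
                "inputs": {"args": repr(args), "kwargs": repr(kwargs)},
                "output": repr(result),
            }
            with open(AUDIT_LOG, "a", encoding="utf-8") as log:
                log.write(json.dumps(entry) + "\n")
            return result
        return wrapper
    return decorator

@audited("approve_invoice")
def approve_invoice(invoice_id: str, amount: float) -> bool:
    # Placeholder decision logic; a real agent would sit behind this call.
    return amount < 10_000
```

Even a log this crude answers the questions boards currently cannot: what data the agent saw, what it decided, and when.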
Why do companies choose to ‘drive with no hands on the wheel’?
If the risks seem obvious, why is this state of affairs accepted? The analysis points to three main reasons that create favourable conditions for chaos.
Firstly, market pressure and FOMO (Fear Of Missing Out). In the race for innovation, no organisation wants to be left behind. The prevailing mentality is ‘implement now, manage later’. The trouble is that with autonomous technology, ‘later’ can mean ‘too late’. Deferring procedural questions and governance frameworks is treated as the price of speed.
Secondly, the ‘magic box’ misconception. There is still a lingering belief that AI is self-learning magic that will ‘manage itself’ and needs no supervision. Advanced algorithms are sometimes treated as infallible oracles, whereas they are only as good as the data they are trained on and the rules they are given. Without those rules, they optimise processes in ways that can prove unpredictable.
Thirdly, a competence deficit. Many companies simply do not know how to effectively manage and control AI agents. Market standards and in-house experts are scarce, and IT departments are often overloaded with the implementation work alone. It is therefore easier to deploy a solution than to build an entire oversight system from scratch.
An uncontrolled agent: a ticking time bomb at the heart of the company
Downplaying the 98% problem is a high-risk strategy. The consequences of a lack of oversight are real and can be severe on at least three levels.
1. Breaches of security and compliance. To be effective, AI agents need access to sensitive data. Without strict control over who accesses that data and for what purpose, a company exposes itself to enormous risk. This applies not only to breaches of the GDPR or trade secrets, but also to the unconscious perpetuation of systemic biases.
2. Critical errors. What happens when an autonomous agent ‘hallucinates’ while approving a key budget? Or misjudges investment risk on the basis of faulty data? The more responsibility is handed to AI, the more severe the consequences of any error become.
3. Misdirected business focus. This is the most insidious risk. As Boomi’s material rightly notes, “without control, the performance of AI agents cannot be properly targeted”. A company can invest millions in technology that does, in a sense, ‘work’, yet fails to advance its strategic objectives: resources wasted on pointless optimisation.
The illusion of preparedness: procedures that exist only on paper
Even where companies claim to have some control, it is often illusory. The figures are alarming:
- Less than a third of companies have any formal governance framework for AI agents.
- Only 29 per cent provide regular training for employees and managers on responsible AI use. (A powerful technology is thus being deployed without teaching people how to use it safely.)
- Only about a quarter of companies have contingency plans (procedures for when an AI fails) or bias assessment protocols in place; a minimal sketch of such a fallback follows this list.
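A contingency plan need not be elaborate. The sketch below is purely illustrative – the class, thresholds and interface are assumptions, not anything the survey prescribes – a ‘circuit breaker’ that suspends an agent after repeated anomalous decisions and hands control back to a human:

```python
class AgentCircuitBreaker:
    """Suspend an AI agent after repeated anomalies and fall back to a human."""

    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0
        self.suspended = False

    def record(self, decision_was_sound: bool) -> None:
        """Feed in the outcome of each reviewed decision."""
        if decision_was_sound:
            self.failures = 0  # healthy decisions reset the counter
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.suspended = True  # stop delegating to the agent

    def route(self, agent_decision, human_decision):
        """While suspended, every decision goes to the human reviewer."""
        return human_decision if self.suspended else agent_decision
```

The point is not the dozen lines of code but the discipline behind them: someone has defined in advance what ‘failure’ means and what happens next.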
This is not risk management. It is pure reactivity – waiting for the first major incident to force change. In the current competitive environment, there may be no room for such mistakes.
From blind fascination to mature strategy
An urgent re-evaluation of the approach is needed. The governance of AI agents is not a ‘nice-to-have’ or a bureaucratic brake on innovation. It is an absolute prerequisite for it. It is a control mechanism to direct the enormous power of AI in the right, safe and profitable direction.
The key, the study suggests, is a change of mentality: companies need to start treating ‘digital workers’ (AI agents) with the same seriousness as human employees.
No organisation would hire a CFO without a background check, a contract, a job description and clear rules of accountability. So why is an AI agent entrusted with budget analysis treated differently? Algorithms deserve a ‘background check’ of their own: testing for bias, hallucinatory tendencies and resilience to errors, as the sketch below illustrates.
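What such a ‘background check’ could look like is sketched here in purely illustrative Python; the agent interface (a single decide method) and the test-case shapes are assumptions for the example, not a standard from the report:

```python
def check_bias(agent, paired_cases) -> float:
    """Fraction of paired cases where changing only a protected attribute
    flips the agent's decision (lower is better)."""
    flips = sum(agent.decide(a) != agent.decide(b) for a, b in paired_cases)
    return flips / len(paired_cases)

def check_groundedness(agent, cases_with_sources) -> float:
    """Fraction of answers whose figures actually appear in the source
    material: a crude proxy for hallucination (higher is better)."""
    grounded = sum(str(agent.decide(case)) in source
                   for case, source in cases_with_sources)
    return grounded / len(cases_with_sources)

def check_resilience(agent, malformed_inputs) -> float:
    """Fraction of malformed inputs the agent refuses instead of acting on."""
    refused = 0
    for case in malformed_inputs:
        try:
            if agent.decide(case) is None:  # None is treated as a refusal
                refused += 1
        except ValueError:
            refused += 1  # raising on bad input also counts as safe behaviour
    return refused / len(malformed_inputs)
```

A CFO candidate who failed such checks would not get the job; an agent that fails them should not get the budget.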
The potential perceived by 73% of managers will only be unlocked if organisations stop ignoring the 98% control gap. The survey shows that companies with mature AI governance are already achieving better business results while protecting themselves from reputational and financial disaster.
The winners will not be those who are quickest to implement any AI, but those who are quickest to learn to manage it professionally. This is the real competitive advantage of the future.
