Agent-based AI without governance. The most expensive technology mistake of the decade?

Agent AI promises an operational revolution, but most companies implement it haphazardly, without real oversight and without measurable business goals. As a result, instead of becoming an efficiency accelerator, the technology increasingly generates risks that can cost organizations real money and reputation.

There is one thing the industry agrees on: agent-based AI is the biggest functional leap since the first large language models. What can be said with equal certainty is that most companies are completely unprepared for it. Palo Alto Networks, after surveying 3,000 European business and security leaders, warns that three out of four agent deployments will run into serious security issues. This is not a thesis from a futurist's report; it is a description of the facts.

In just two years, we have moved from passive chat to active automation of operations. Agents are making decisions on behalf of the business, triggering transactions, selecting partners, and creating content that shapes brand reputation. This is not ‘gen AI copywriter 2.0’. This is a new layer of execution.

Gartner is already predicting increased abandonment of agent-based AI projects by 2027. MIT reports that most enterprise pilots of generative AI have failed to deliver sustainable value. Stanford adds that only 6% of organisations have advanced AI security in place. Against this backdrop, Palo Alto’s statistic of three-quarters of projects running into serious difficulties looks downright optimistic.

The root of the problem is not the technology but a classic pain point of enterprise IT: starting projects from the tool rather than from the business objective and the risk map. Teams choose the framework and the LLM first, and only then start thinking “how to monetise this into an outcome”, while budgets and timelines balloon.

Companies also show a dangerous tendency to delegate responsibility for AI to the IT department. This is a logical illusion: an agent that performs business activities is not an IT function. It is an extension of management, operations, sales and customer service. If agent governance is not handled at management level, execution errors will carry a real, financial cost.

Palo Alto points in a direction that, incidentally, aligns well with maturing AI practice in the US: agents need to be embedded in business strategy, with clear goals, owners and limits on their actions. A culture of least privilege and short-lived credentials should apply to humans and machines alike. And projects should not go into production without ‘premortem’ scenarios, precise KPIs and defined stopping points requiring human decisions.
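To make the recommendations above concrete, here is a minimal, hypothetical sketch of what a policy gate in front of an agent's actions could look like: an allow-list enforcing least privilege, a spend limit, and a defined stopping point that escalates to a human. All action names and thresholds are illustrative assumptions, not part of any vendor's framework.

```python
from dataclasses import dataclass

# Illustrative policy: which actions the agent may take at all (least
# privilege), and above what amount a human decision is required.
ALLOWED_ACTIONS = {"send_quote", "create_ticket", "issue_refund"}
AUTO_APPROVE_LIMIT = 500.0  # currency units; purely an example threshold

@dataclass
class Decision:
    allowed: bool        # may the agent execute this autonomously?
    needs_human: bool    # stopping point: escalate to a human owner
    reason: str

def gate(action: str, amount: float) -> Decision:
    """Check a proposed agent action against the governance policy."""
    if action not in ALLOWED_ACTIONS:
        return Decision(False, False, f"action '{action}' not on allow-list")
    if amount > AUTO_APPROVE_LIMIT:
        return Decision(False, True, "amount exceeds auto-approve limit")
    return Decision(True, False, "within policy")

print(gate("issue_refund", 120.0))    # within policy: runs autonomously
print(gate("issue_refund", 2500.0))   # stopping point: human must decide
print(gate("wire_funds", 50.0))       # blocked: never granted to the agent
```

The point of the sketch is that the limits live outside the model: no prompt engineering is involved, and the agent cannot talk its way past the gate.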

Agent-based AI has a future, but not as an IT curiosity. It will work when – and only when – it becomes part of corporate governance. Then it will stop being a risk and start being an advantage.
