EY report: Artificial intelligence is only profitable after a costly start-up

More and more large companies are realizing that artificial intelligence does not always bring immediate profits; it often starts with costly lessons. The latest EY study shows that almost every organization that has implemented AI has incurred initial losses, even though most still believe in its long-term potential.


A recent EY study found that 99% of large companies that have deployed AI solutions reported at least some financial losses; the total damage was estimated at around $4.4bn. The main sources of losses were regulatory non-compliance (57%), failure to meet ESG targets (55%), and performance defects such as models producing errors or biased outputs (53%). Reputational damage and legal risks appeared less frequently.

EY conducted an anonymous survey in July-August 2025 among 975 people with oversight of AI at companies with annual revenues of more than US$1bn. Despite the initial losses, the majority of respondents remain optimistic, convinced that AI will bring significant benefits in the long term.

Why talk about “responsible AI”?

The EY study focused not on the technology itself but on the practice of Responsible AI (RAI): the set of rules and oversight mechanisms in place within an organisation, such as policies, monitoring, and usage guidelines. Companies with more mature RAI practices reported better results in sales, cost savings and employee satisfaction. For example, organisations with real-time monitoring are as much as 34% more likely to report revenue growth and 65% more likely to report cost savings.

Gaps in board-level knowledge are a significant wake-up call: only 12% of C-suite executives could correctly match specific AI risks with the appropriate controls. Meanwhile, many companies allow so-called citizen developers, employees who deploy AI tools on their own, to operate with limited oversight: as many as 60% of companies permit this practice without a full formal framework.

Reflections and risks for the Polish context

EY’s findings align with the growing observation that adopting AI involves a costly learning curve. A BCG report, for example, indicates that only 5% of companies actually generate real value from their AI investments; the rest remain in the experimental or minimal-impact phase. Moreover, when it comes to AI risk disclosure, an analysis of SEC filings shows that companies are beginning to acknowledge the obligation to communicate AI risks, although they often do so only superficially.

The EY study makes it clear that financial losses are an almost inevitable cost of entry into the world of AI, but their scale and persistence depend on the quality of governance mechanisms. When technology operates without robust rules, risk rather than potential gains the upper hand. In an environment where AI is increasingly regulated and scrutinised, responsible adoption becomes not so much a luxury as a strategic necessity.
