Artificial intelligence, widely recognised as a driver of business innovation, has become an equally powerful tool in the hands of criminals. The latest Elastic 2025 Global Threat Report, based on analysis of more than one billion data points, highlights a worrying trend: the barrier to entry into cybercrime is falling drastically, and automated attacks are becoming the industry's new standard.
The data is unambiguous. Globally, the volume of malware created with AI support rose by 15.5 per cent over the year. The picture is far more serious in the Windows environment, where the share of such threats nearly doubled, reaching 32.5 per cent. Przemysław Wójcik, president of AMP S.A., Elastic’s partner in Poland, points out that these are no longer incidental attacks but mass campaigns targeting critical infrastructure – from the energy sector, through finance, to healthcare.
The operating mechanism of today’s criminal groups is evolving towards a service model. The report reveals that one in eight malware samples is designed to steal data from web browsers. The credentials obtained in this way are not just used for one-off thefts, but go to information brokers, feeding a secondary market. This is what drives attacks on the cloud, where more than 60 per cent of incidents now involve identity theft and unauthorised access.
The financial impact of this ‘revolution’ is measurable. Automated ransomware crippled Change Healthcare’s payment systems in the US, generating losses of $870 million, while the Scattered Spider group, using AI-assisted social engineering, exposed MGM Resorts to costs exceeding $100 million. The Polish market has also seen activity by infostealers such as Lumma and RedLine, which have hijacked bank accounts on a massive scale while posing as courier companies.
With the democratisation of threats, where advanced darknet tools are available even to amateurs, traditional defence methods are becoming insufficient. Experts agree: the only effective answer to offensive algorithms is defensive algorithms. Defence must be based on artificial intelligence, but human oversight and a radical strengthening of identity verification in cloud environments remain key. The arms race between offensive and defensive AI has just entered a new phase.

