AI as a weapon, a shield and a risk: how do CISOs need to change their strategy?

Artificial intelligence is no longer just a tool for innovation, but has become a fundamental and complex element of the new business risk landscape. As the latest analysis shows, companies must manage AI as a three-dimensional challenge: it is simultaneously a weapon for attackers, a shield for defenders, and, most importantly, a new internal source of risk associated with the explosion of machine identities.

Philosopher Paul Virilio once observed that the invention of the ship was also the invention of the shipwreck. In the context of the current rapid wave of AI adoption, this observation takes on existential significance for business. AI has ceased to be merely a tool for innovation; it has become a fundamental and complex element of the new risk landscape.

For chief information officers (CIOs), chief technology officers (CTOs), chief information security officers (CISOs) and boards, managing AI has become a three-dimensional challenge. The latest analysis, based on the CyberArk 2025 Identity Security Landscape report, shows that artificial intelligence is at once a powerful weapon in the hands of attackers, a key component of the defence arsenal and – most importantly and most often ignored – a new, internal source of risk.

While companies focus on the frontline arms race, the real threat is growing internally. The unchecked adoption of AI is creating a new, invisible class of identity for which traditional security models are completely unprepared.

Face one: AI as a weapon

The first dimension of the AI challenge is the most visible: the offensive use of artificial intelligence by cybercriminals. The data is alarming: as many as nine out of ten organisations reported a successful identity-focused security breach last year. These attacks are becoming increasingly effective as AI has dramatically increased their scale, sophistication and precision.

Traditional phishing attacks are undergoing a revolution. Attackers are using AI to generate emails that are “highly personalised, context-aware and almost indistinguishable from legitimate senders”. They can analyse public data, mimic the tone of communications and automate social engineering campaigns across multiple channels simultaneously.

As a result, more than 75% of respondents admitted that their organisations had fallen victim to successful phishing attacks, including the increasingly common deepfake scams. Worse still, more than half had been victimised multiple times.

This means that the traditional ‘human firewall’ model – relying on employee training to detect threats – is ceasing to work. No employee, however well trained, can consistently detect attacks that are near-perfect. The threat is escalating from an IT risk to a direct financial risk, as evidenced by the high-profile case of a million-euro phishing scam carried out with a deepfake voice.

Face two: AI as a shield

In response to AI-driven attacks, companies are rightly investing in defensive AI. The second face of AI is its role as a shield. The vast majority of organisations – 94% – confirm that they are already using AI and large language models (LLMs) to strengthen their identity security strategies.

These investments are essential. Security operations centre (SOC) teams are using AI for advanced analytics and anomaly detection (55% of those surveyed), synthetic identity detection (58%) and incident response automation (51%). In practice, AI can reduce incident response times from hours to seconds, automatically containing threats and relieving overburdened human teams.
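To make the idea of anomaly detection coupled with automated response concrete, here is a minimal sketch (an illustration, not anything from the report): identity activity is scored with an off-the-shelf outlier model and access is suspended when an event looks abnormal. The feature layout, the svc-reporting-bot identity and the disable_identity() hook are assumptions for the example; a real deployment would call your own IAM/PAM platform's API.

```python
# Minimal sketch: score identity activity with an off-the-shelf outlier model
# and suspend access when an event looks abnormal. The feature layout, the
# svc-reporting-bot identity and disable_identity() are illustrative
# assumptions -- a real deployment would call your IAM/PAM platform's API.
import numpy as np
from sklearn.ensemble import IsolationForest

def disable_identity(identity: str) -> None:
    # Placeholder response action; replace with a call to your identity platform.
    print(f"[response] access suspended for {identity}")

# Synthetic history of benign logins: (hour_of_day, failed_attempts, new_device)
history = np.array([[9, 0, 0], [10, 1, 0], [14, 0, 0], [11, 0, 0], [16, 0, 0]] * 40)
model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# A 3 a.m. burst of failed logins from a new device scores as an outlier (-1)
# and is contained automatically, in seconds rather than hours.
event = {"identity": "svc-reporting-bot", "features": [3, 7, 1]}
if model.predict([event["features"]])[0] == -1:
    disable_identity(event["identity"])
```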

Deploying defensive AI is no longer optional; it is a strategic necessity in the ongoing arms race. Relying on human analysts to detect AI-generated attacks is a battle lost before it begins.

However, a worrying paradox emerges here. If 94% of companies are using AI to defend themselves, why are 90% of them still experiencing successful breaches? The answer is simple: defensive AI is necessary to stay in the game at all, but it is not a solution in itself. This apparent arms race is distracting boards and CISOs from a much bigger, internal threat that is growing at the same time.

Face three: AI as an internal risk

The third and most critical face of AI is the risk generated by the organisation itself. The paradox is that the greatest threat comes not from external hackers, but from the organisation’s own employees, acting in accordance with the board’s mandate for innovation.

CyberArk’s analysis reveals a fundamental divide. On the one hand, up to 72% of employees regularly use AI tools in their daily work. On the other hand, 68% of security leaders admit that they lack identity security controls for these particular technologies.

This gap between rapid adoption and slow security is creating the phenomenon of Shadow AI. The report shows that 36% of employees are using AI tools that are not fully approved or managed by IT. Even more worryingly, 47% of companies explicitly admit that they are unable to secure and manage all Shadow AI tools used in the organisation.

For management, the implication is brutal: the innovation mandate is interpreted by business departments as a green light to adopt whatever tools they want, completely bypassing IT and security controls. In practice, this means that the company’s confidential data – source code, financial strategies, customer data – is being fed en masse into external LLMs over which the company has no control. This is not a potential data leak; it is a data leak already in progress.
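One practical way to start closing this gap is to look at the traffic itself. The sketch below is an illustration only, not a statement about any specific product: it flags outbound proxy requests to well-known LLM API endpoints that are not on an approved list. The domain list, log format and user names are assumptions for the example.

```python
# Minimal sketch of a Shadow AI check: flag outbound requests to well-known LLM
# API endpoints that are not on the organisation's approved list. The domain
# list, log format and user names are example assumptions, not a complete
# inventory of providers.
from collections import Counter

KNOWN_LLM_DOMAINS = {"api.openai.com", "api.anthropic.com", "generativelanguage.googleapis.com"}
APPROVED = {"api.openai.com"}  # e.g. the single enterprise-contracted provider

# (user, destination domain) pairs as they might come out of a web proxy log
proxy_log = [
    ("alice", "api.openai.com"),
    ("bob", "api.anthropic.com"),            # not approved -> Shadow AI signal
    ("build-pipeline", "api.anthropic.com"), # a machine identity doing the same
]

shadow_hits = Counter(dst for _, dst in proxy_log if dst in KNOWN_LLM_DOMAINS - APPROVED)
for domain, count in shadow_hits.items():
    print(f"unapproved LLM endpoint {domain}: {count} outbound request(s)")
```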

A new class of threat: AI agents and the explosion of machine identities

The Shadow AI problem is only a prelude to a much deeper, systemic challenge. The rapid adoption of AI is not just creating gaps in policies; it is actively generating a new class of users – machine identities.

The scale of this change is difficult to imagine. The CyberArk report reveals that machine identities (belonging to apps, cloud services, scripts and AI processes) now outnumber human identities by a staggering 82:1 ratio.

What is driving this explosion? AI itself. It is cited as the No. 1 source of new identities with privileged and sensitive access expected to emerge in 2025. These identities are not passive: as many as 42% of all machine identities have access to a company’s sensitive data, compared with 37% of human users.

Here we come to the most critical conclusion of the entire analysis. Despite this reality, as many as 88% of security leaders admit that in their organisations the privileged user is still defined as exclusively human.

This means that entire identity security (IAM/PAM) strategies at nearly nine out of ten companies are built on a fundamentally false, human-centric assumption. They are systemically blind to roughly 99% of the identities in their networks – 82 out of every 83 – almost half of which have access to the company’s crown jewels.

In this context, a new class of threat is emerging: AI agents. These are not mere scripts, but autonomous machine identities that perceive, reason and act on behalf of the company. Their privileged access, combined with autonomy, represents an entirely new attack vector for which traditional security systems, designed to monitor humans, are not prepared.

Conclusion: resilience in the age of AI

Analysis of this three-dimensional challenge leads to one conclusion: investing in AI as a shield to survive AI-powered attacks is futile if the internal risk of AI is ignored. Companies are innovating at scale (72% adoption) without a security foundation (a 68% controls gap), creating an army of invisible, privileged machine identities (an 82:1 ratio) that they cannot see or control (88% still define the privileged user as human).

To build business resilience, CISOs and CIOs need to convince boards of a strategy based on identity security.

1. A three-tier approach to AI deployments. AI security must be embedded throughout the technology lifecycle:

  • Secure Development: Ensuring that training data is clean and models are created according to security practices.
  • Secure Deployment: The operational environment in which AI operates must be protected by strict identity security measures.
  • Secure Use: Integration of AI with identity security models to protect user access and the agents themselves.

2. Defining strategies for AI agents. As the report states: ‘Machines behaving like humans require both human and machine security controls’. Each autonomous AI agent must be treated as a new ‘employee’ with a unique identity. This requires authentication, access management according to the principle of least privilege, and control of the AI identity lifecycle to prevent unauthorised access (see the sketch after this list).

3. Consolidate and centralise visibility. The problem of ‘identity silos’, which 70% of respondents identify as a source of risk, must be addressed. All identities – human and machine – need to be managed on a single platform to regain visibility and control, a point the sketch below also illustrates.
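To make recommendations 2 and 3 tangible, here is a minimal sketch in plain Python, not any vendor's platform: an AI agent is onboarded as a first-class identity with an accountable human owner, least-privilege scopes and a short-lived credential, and it is registered in a single consolidated inventory alongside every other identity. All names, fields and scopes are hypothetical.

```python
# A minimal sketch (not a vendor API) combining points 2 and 3 above: every AI
# agent gets its own identity with an accountable human owner, narrow scopes
# and a short-lived credential, and it is registered in the same inventory as
# every other identity so privileged access can be reviewed in one place.
# All names, fields and scopes are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class Identity:
    name: str
    kind: str                      # "human" or "machine"
    owner: str                     # the human accountable for a machine identity
    scopes: set = field(default_factory=set)
    expires_at: Optional[datetime] = None

inventory = {}  # single place to review all identities, human and machine

def register_agent(name: str, owner: str, scopes: set, ttl_hours: int = 1) -> Identity:
    # Treat the agent like a new "employee": unique identity, least privilege, short-lived credential.
    agent = Identity(name, "machine", owner, scopes,
                     datetime.now(timezone.utc) + timedelta(hours=ttl_hours))
    inventory[name] = agent
    return agent

def authorize(identity: Identity, requested_scope: str) -> bool:
    # Least privilege: only explicitly granted scopes, and only while the credential is valid.
    if identity.expires_at and datetime.now(timezone.utc) >= identity.expires_at:
        return False
    return requested_scope in identity.scopes

bot = register_agent("invoice-agent-07", owner="finance-ops", scopes={"erp:read"})
print(authorize(bot, "erp:read"))    # True - explicitly granted
print(authorize(bot, "erp:write"))   # False - never granted; must go through review

machines_with_access = [i for i in inventory.values() if i.kind == "machine" and i.scopes]
print(f"{len(machines_with_access)} machine identity(ies) with granted access to review")
```

The point is less the code than the model: the agent is authenticated, its access is explicit and expiring, and it is visible in the same inventory as every other identity.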

As the report aptly notes: “AI may rewrite the rules, but identity security controls the risk”. AI-based innovation must go hand in hand with an identity security strategy. Ignoring AI-generated machine identities is not an acceptable risk. It is a straight road to operational and strategic disaster.
