New Year’s Eve 2025: Farewell to the year of ransomware, hello to the year of deepfakes

The explosion of generative artificial intelligence capabilities has turned deepfakes, in short order, from a technological curiosity into a critical vector for cyberattacks, one that will rewrite the rules of business security in the coming years. We are entering a reality in which human senses are helpless against synthetic content, and the burden of verifying the truth shifts onto advanced IT infrastructure.


Until recently, digital security rested on a simple, analogue rule of thumb: seeing is believing. The year 2024 brutally put that assumption to the test, and the coming months will finally bury it in the archives of IT history. Deepfakes have ceased to be an internet curiosity or a tool of political disinformation. Alongside ransomware, they have become a major vector for cyberattacks targeting business. We stand at the threshold of a moment when verifying identity and content authenticity will become a key service in every IT integrator's portfolio.

Modern cyberspace is undergoing a transformation on a scale hard to compare with anything we have seen in the last decade. The advent of cheap, widely available and extremely powerful artificial intelligence tools has made it possible to manipulate audio and video content in ways that now elude the human senses entirely. Researchers at leading academic institutions, including experts at the Media Forensic Lab at the University at Buffalo, warn that this phenomenon is only getting started. If today's synthetic media seem impressive, the deepfakes of 2026 may make distinguishing fiction from reality an impossible task for humans.

Democratisation of fraud – a scale that overwhelms

To understand the gravity of the situation, look at the numbers that best illustrate the dynamics of this market. According to estimates from DeepStrike, a cybersecurity company, the volume of deepfakes on the web has grown exponentially: from around 500,000 samples in 2023 to an estimated 8 million in 2025. That is a sixteenfold jump in two years, or roughly a fourfold (300%) increase each year.

What is driving this avalanche? First and foremost, the drastic lowering of the barrier to entry. Just a few years ago, creating a convincing fake video required powerful workstations, deep machine-learning expertise and gigabytes of training data. Today, the technical threshold has dropped to practically zero.

The advent of powerful AI applications such as Sora 2 from OpenAI and Veo 3 from Google, combined with a wave of startups offering dedicated tools, has changed the rules of the game. Today anyone – regardless of intention – can describe an idea, have a language model (such as ChatGPT or Gemini) write a script, and then generate high-quality audiovisual material in minutes, as the sketch below illustrates. AI agents can automate this process from A to Z. As a result, the ability to generate consistent deception on a massive scale has been democratised.
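How low the barrier has fallen can be shown in a few lines. In the sketch below, the chat call follows the publicly documented OpenAI Python SDK; the video step (`video_api.generate`) is a hypothetical placeholder, since each provider exposes its own interface and usage policies.

```python
# Minimal sketch of an automated script-to-video pipeline.
# The chat call uses the documented OpenAI Python SDK; the video
# step is a HYPOTHETICAL placeholder - real services each expose
# their own APIs and enforce their own usage policies.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: a language model turns a one-line idea into a full script.
idea = "a 30-second product announcement delivered by a company spokesperson"
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": f"Write a short video script: {idea}"}],
)
script = response.choices[0].message.content

# Step 2 (hypothetical): hand the script to a text-to-video service.
# video = video_api.generate(prompt=script, duration_seconds=30)
print(script)
```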

This is not a theoretical threat. Large retailers are already being inundated by waves of up to 1,000 AI-generated fake calls per day. Deepfakes have ceased to be a 'boutique product' reserved for targeted attacks on executives (CEO fraud); they have become an off-the-shelf 'solution' for cybercriminals engaged in mass extortion, harassment and the undermining of trust in brands.

The end of the 'uncanny valley' – technology overtakes perception

For a long time, our line of defence was the imperfection of the technology itself. Cybersecurity experts taught employees to look for telltale details: unnatural blinking, artefacts around the mouth, strange lighting or a 'metallic' reverberation in the voice. That era is coming to an end.

The spectacular improvements of recent months stem from fundamental changes in the architecture of generative models. The key is temporal consistency. Modern video models can separate information about a person's identity from information about their movement. The same movement can therefore be transferred seamlessly to different identities, and one identity can perform an almost unlimited range of movements without the image losing stability. Gone are the flickers, deformations and structural distortions around the eyes or jaw that used to provide reliable forensic evidence.
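A toy sketch can make the disentanglement idea concrete. Nothing here reflects any real model's architecture; the PyTorch code below (with arbitrary layer sizes) simply encodes identity and motion into separate latent vectors and recombines them, so the same motion code can drive different identities:

```python
# Illustrative sketch (not a production model) of identity/motion
# disentanglement: one encoder captures WHO is in the frame, another
# captures HOW they move, and a decoder recombines the two.
import torch
import torch.nn as nn

class DisentangledGenerator(nn.Module):
    def __init__(self, frame_dim=1024, id_dim=64, motion_dim=64):
        super().__init__()
        self.identity_enc = nn.Sequential(nn.Linear(frame_dim, id_dim), nn.ReLU())
        self.motion_enc = nn.Sequential(nn.Linear(frame_dim, motion_dim), nn.ReLU())
        self.decoder = nn.Linear(id_dim + motion_dim, frame_dim)

    def forward(self, identity_frame, driving_frame):
        # Identity comes from one person, motion from another.
        z_id = self.identity_enc(identity_frame)
        z_motion = self.motion_enc(driving_frame)
        return self.decoder(torch.cat([z_id, z_motion], dim=-1))

gen = DisentangledGenerator()
person_a = torch.randn(1, 1024)         # reference frame of person A
person_b_motion = torch.randn(1, 1024)  # frame of person B mid-gesture
# Output: person A's identity performing person B's movement.
fake_frame = gen(person_a, person_b_motion)
print(fake_frame.shape)  # torch.Size([1, 1024])
```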

Equally, if not more, worrying is the progress in audio. Voice cloning has crossed the threshold of indistinguishability. A few seconds of sample audio is enough to generate a clone that not only sounds like the victim but retains their natural intonation, speech rhythm, and even characteristic pauses for breath and the emotional colouring of the voice. The features that previously betrayed a recording's synthetic origin have virtually disappeared. In everyday situations, especially on messaging apps with lower transmission quality, this realism is enough to fool even experienced users.
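One way researchers quantify this indistinguishability is to compare speaker embeddings: if a clone's embedding sits as close to the victim's as two genuine recordings of the victim sit to each other, embedding-based checks alone cannot tell them apart. A minimal sketch, assuming a hypothetical `get_speaker_embedding()` function standing in for any speaker-verification encoder:

```python
# Sketch: comparing recordings via speaker-embedding cosine similarity.
# get_speaker_embedding() is a HYPOTHETICAL stand-in for a real speaker
# encoder; here it is stubbed with seeded random vectors so the script
# runs end to end.
import numpy as np

def get_speaker_embedding(wav_path: str) -> np.ndarray:
    """Hypothetical placeholder for a real speaker-verification encoder."""
    rng = np.random.default_rng(abs(hash(wav_path)) % (2**32))
    vec = rng.normal(size=256)
    return vec / np.linalg.norm(vec)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

victim = get_speaker_embedding("victim_sample.wav")
clone = get_speaker_embedding("suspected_clone.wav")

# Modern clones routinely score above typical same-speaker thresholds
# (thresholds vary by encoder), which is what makes them so dangerous.
print(f"similarity: {cosine_similarity(victim, clone):.3f}")
```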

Scenario 2026 – attacks in real time

Looking ahead, analysts and forensic media researchers are sketching a scenario in which static deception gives way to real-time manipulation. Once that threshold is crossed, the paradigm of business communications changes.

We are moving towards live synthesis: generative models are learning to create content on the fly rather than delivering pre-rendered clips. At the same time, identity modelling is converging. AI systems are starting to capture and replicate not only what a person looks like, but also their unique 'behavioural signature': the way they move, their gestures in specific contexts, their facial micro-expressions. The end result is no longer just an image that looks like person X; it is an entity that behaves like person X over time.

The deepfakes of 2026 will aim to evade detection systems by mimicking the nuances of human biology. In a media environment where audiences' attention is fragmented and content spreads faster than any fact-checking can keep up, this creates room for abuse with enormous destructive potential, from stock-market disinformation to sophisticated social engineering inside corporations.

The new role of the IT integrator – from securing the network to certifying the truth

Faced with this threat landscape, the IT industry must own up to an uncomfortable truth: progress on defensive frameworks is disproportionately small compared to the pace of offensive AI development. Despite numerous reports and proposals for multi-layered defences, we still rely on human judgement, which is becoming the weakest link.

This means that integrators' offerings need to be redefined. Traditional security packages that protect endpoints and networks are no longer enough. Business customers will soon need protection at the level of the content infrastructure itself.

As the perceptual gap between authentic and synthetic media closes, the line of defence must shift from humans to cryptography. The future lies in solutions that guarantee secure provenance: media cryptographically signed at the source of recording (e.g. in the camera itself) and tools that comply with open standards such as those proposed by the Coalition for Content Provenance and Authenticity (C2PA).
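The principle behind such provenance schemes fits in a few lines. The sketch below is not the C2PA manifest format (which embeds signed claims inside the file itself); it is a simplified illustration of the underlying idea, a capture device signing a digest of what it recorded, using the Python `cryptography` library:

```python
# Simplified illustration of signed-at-source media provenance.
# This is NOT the C2PA manifest format - just the core idea:
# sign a digest of the recording at capture time, verify it later.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# On the capture device: a key pair provisioned at manufacture.
device_key = Ed25519PrivateKey.generate()
public_key = device_key.public_key()

media = b"...raw video bytes straight from the sensor..."
digest = hashlib.sha256(media).digest()
signature = device_key.sign(digest)  # travels with the file as metadata

# Downstream: anyone holding the device's public key can verify.
try:
    public_key.verify(signature, hashlib.sha256(media).digest())
    print("provenance intact")
except InvalidSignature:
    print("media was altered after capture")
```

In the real C2PA ecosystem the signed claims also record edit history and the signing certificate chains back to a trusted issuer, but the verification logic downstream rests on exactly this kind of signature check.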

Integrators thus have the opportunity to become not just suppliers of hardware and software, but guardians of digital trust. Applying a Zero Trust approach not only to users on the network but also to the multimedia content circulating within the organisation will become the standard demanded by compliance departments.
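In practice, 'Zero Trust for content' reduces to a default-deny policy gate at ingest: media without a valid provenance signature is treated as untrusted. A hedged sketch, where `verify_provenance()` is assumed to wrap a check like the Ed25519 verification above:

```python
# Sketch of a default-deny ingest policy for multimedia content.
# verify_provenance() is a HYPOTHETICAL wrapper around a signature
# check like the one sketched above; here it is a stub.

def verify_provenance(media: bytes, signature: bytes) -> bool:
    """Placeholder: in production this would run a C2PA/signature check."""
    return False  # default-deny stub

def ingest(media: bytes, signature: bytes | None) -> str:
    # Zero Trust for content: a missing signature is treated
    # exactly like an invalid one.
    if signature is None or not verify_provenance(media, signature):
        return "quarantined"
    return "trusted"

print(ingest(b"unverified clip", None))  # -> quarantined
```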

Artificial intelligence has given us the tools to create any reality. Now the technology industry must provide the tools that allow us to operate safely within it. Without them, in a business world built on trust, we risk decision paralysis, in which no one can be sure whether they are talking to a key partner or to their digital shadow.
