Technological arms race – who is using deepfakes, and who is fighting them?

Izabela Myszkowska

The rise of deepfake fraud is accelerating, forcing companies into a technological arms race. As artificial intelligence becomes a weapon in the hands of fraudsters, traditional verification methods are no longer sufficient. It is no longer just finances that are at stake, but also trust – the foundation of the digital economy.

The story of an employee of a multinational corporation who, in February 2024, transferred $25 million to fraudsters after a video conference with an AI-generated ‘chief financial officer’ is no longer a cautionary anecdote about the future. It has become a brutal wake-up call for boards around the world. The threat of deepfakes – synthetic, deceptively realistic video, audio and images – has entered the mainstream of cybercrime and is growing exponentially.

Escalating threat in numbers

Market data confirms the global trend. The 2024 Sumsub Identity Fraud Report indicates that 7% of all global fraud attempts are already linked to deepfake technology – a fourfold increase on the previous year.

This trend is particularly evident in European markets. In Germany, one of the region’s key economies, the number of fraud attempts using deepfakes rose by 1,100% year-on-year in the first quarter of 2024. A related problem is growing in parallel: synthetic identity documents. Germany has also recorded a 567% increase in fabricated identities that combine real, stolen data with AI-generated elements, making verification significantly more difficult and costly.


At the heart of this phenomenon is the democratisation of advanced AI tools. Models such as generative adversarial networks (GANs), diffusion models and autoencoders, which only a few years ago were confined to research labs, are now at anyone’s fingertips. They analyse huge datasets, learning a person’s facial expressions, voice, intonation and other distinguishing characteristics, and then generate fake content that is almost indistinguishable from the original to the human eye and ear.
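To see how low the barrier has become, consider the sketch below: a few lines of Python are enough to produce a photorealistic image with an openly downloadable diffusion model. It assumes the open-source Hugging Face diffusers library and a publicly available Stable Diffusion checkpoint; it is an illustration of accessibility, not a recipe attributed to any particular fraud ring.

```python
# Minimal sketch of how accessible image generation has become, assuming the
# open-source `diffusers` library and a public Stable Diffusion checkpoint.
import torch
from diffusers import StableDiffusionPipeline

# Download a pretrained text-to-image diffusion model (one line of code).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # an ordinary consumer GPU is enough

# Generate a photorealistic portrait from a plain-language prompt.
image = pipe("photorealistic portrait, studio lighting").images[0]
image.save("synthetic_portrait.png")
```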

The implications go far beyond financial fraud. Deepfakes are becoming a powerful tool of disinformation, capable of manipulating public opinion through false statements by politicians or the fabrication of evidence in criminal cases.

The front line: how do companies defend themselves?

Faced with such an advanced threat, companies must adopt a defence-in-depth strategy that combines technology with organisational procedures. Relying on human vigilance alone is no longer enough; what becomes crucial is a multi-layered security system:

  • Awareness and procedures: Educating employees about social engineering is fundamental. Firm rules should be put in place, such as requiring that financial transactions be independently confirmed by the counterparty through a second communication channel (e.g. a telephone call or a face-to-face meeting), especially for unusual or urgent orders.
  • Detection technologies: Modern security systems use AI to combat AI. Liveness detection tools used during biometric verification can distinguish a real face in front of the camera from a photograph, mask or deepfake by analysing micro-movements, skin texture and light reflections in the eyes.
  • Behavioural analysis: Systems monitor user activity in real time, looking for anomalies. Suspicious patterns, such as multiple registration attempts with the same identity data from different devices, can signal a fraud attempt (a simplified sketch of this rule follows the list).
  • Fraud network detection: Advanced platforms identify links between seemingly independent accounts, revealing organised criminal networks and allowing them to be blocked before they can do more damage.
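To make the behavioural rule from the third bullet concrete, here is a simplified sketch that flags identity data reused across many devices within a short window. The function name, field names and thresholds are illustrative assumptions, not a description of any vendor's actual system.

```python
# Illustrative sketch: flag identity data registered from too many distinct
# devices within a look-back window. Thresholds and fields are assumptions.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(hours=24)   # look-back window (assumed)
MAX_DEVICES = 3                # distinct devices tolerated per identity (assumed)

def find_suspicious_identities(attempts):
    """attempts: iterable of dicts with 'identity_hash', 'device_id', 'timestamp'."""
    seen = defaultdict(list)  # identity_hash -> [(timestamp, device_id), ...]
    for a in attempts:
        seen[a["identity_hash"]].append((a["timestamp"], a["device_id"]))

    suspicious = set()
    for identity, events in seen.items():
        events.sort()
        for ts, _ in events:
            # Count distinct devices used for this identity within the window.
            devices = {d for t, d in events if ts - WINDOW <= t <= ts}
            if len(devices) > MAX_DEVICES:
                suspicious.add(identity)
                break
    return suspicious

# Example: the same (hashed) identity registered from four devices in one day.
now = datetime.utcnow()
attempts = [
    {"identity_hash": "id_42", "device_id": f"dev_{i}",
     "timestamp": now - timedelta(hours=i)}
    for i in range(4)
]
print(find_suspicious_identities(attempts))  # {'id_42'}
```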

Law enforcement agencies are adapting

While the private sector is investing in new defence technologies, law enforcement agencies around the world, including Interpol, are facing the challenge of adapting their investigative methods. Combating deepfakes requires a new set of forensic tools:

  • Metadata analysis: Examining hidden information embedded in files for signs of manipulation (a simple illustration follows this list).
  • Reverse image search: Identifying the original sources of an image.
  • Linguistic analysis: Detecting inconsistencies in text or speech.
  • Explainable artificial intelligence (XAI): Using AI models that not only classify material as fake but can also indicate which elements led to that verdict, which can be key evidence in court.
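As a simple illustration of the first item, the sketch below reads EXIF metadata from an image file, assuming the Pillow library. Real forensic workflows go much deeper: missing or software-stamped metadata is a weak signal at best, and the file name here is hypothetical.

```python
# Minimal metadata inspection sketch, assuming the Pillow library.
# Absent or software-stamped EXIF is a hint, not proof, of manipulation.
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_exif(path):
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata - common for AI-generated or stripped images.")
        return
    for tag_id, value in exif.items():
        tag = TAGS.get(tag_id, tag_id)  # map numeric tag IDs to readable names
        print(f"{tag}: {value}")
        # Editing tools often stamp themselves into the Software field.
        if tag == "Software":
            print(f"  -> produced/edited with: {value}")

inspect_exif("evidence.jpg")  # hypothetical file name
```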

Legal uncertainty and the “liar’s dividend”

However, the biggest challenge remains in the regulatory and legal sphere. Criminals operate across borders and the viral nature of digital media makes it extremely difficult to prosecute them. Courts are faced with a fundamental question: how do you prove the authenticity of digital evidence in a world where everything can be faked?

This gives rise to a dangerous phenomenon known as the ‘liar’s dividend’ – the ability to undermine authentic evidence by simply stating that it may be a deepfake. This risks eroding trust in any digital evidence.

The fight against deepfakes is not just a technological arms race. It is a systemic challenge that requires a holistic approach: cooperation between the private sector, which often has more advanced technology, and law enforcement, as well as the creation of a clear legal framework and building public awareness. This is the only way to effectively defend against the new generation of digital threats.
