Until recently, social engineering was mainly associated with a suspicious email from a ‘finance director’ or a phone call asking for a quick transfer. Today, that picture is out of date. Attackers can join a corporate video meeting, mimic the CEO’s voice and convincingly ask for confidential data – all thanks to deepfakes. Generative artificial intelligence has revolutionised the arsenal of cybercriminals, and many companies are not yet ready for this change.
From gadget to criminal tool
Deepfake technology – until a few years ago a toy for technology enthusiasts – has become available on a mass scale. Tools for generating synthetic faces, voices and videos are now cheap, easy to use and require no advanced technical knowledge. A few minutes of source material is all it takes to create a video in which a company’s CEO says things he or she has never said.
Cybercriminals were quick to exploit this accessibility. Deepfakes are no longer a technological curiosity – they are now a practical tool for fraud, extortion and corporate espionage.
CEO fraud 2.0 – the new face of social engineering
In the traditional model of so-called ‘CEO fraud’, the attacker would impersonate a board member, usually by sending an email with an urgent request to transfer funds. In the age of remote working and video meetings, this scenario has taken on a new dimension. It is increasingly common for victims to speak to a synthetic version of the boss – generated in real time or prepared in advance by the attackers.
Such an attack is difficult to recognise, especially when it takes place in a familiar context: a board meeting, a video conference with an investor, a briefing with a department head. Attackers can reproduce the tone, the phrasing and even the characteristic gestures of the person they are impersonating. Trust built up over years of professional relationships is turned against employees – and the result is often a costly mistake.
Why classical safeguards have failed
Traditional protection mechanisms – passwords, two-factor authentication, identity confirmation procedures – were designed for a very different threat landscape. In the age of deepfakes, they are becoming less and less effective.
Voice biometrics? It can be spoofed. Video authentication? It no longer offers any guarantee. Even a ‘live’ video conference can be manipulated with synthetic facial expressions rendered in real time. While text-based phishing was relatively easy to spot, the synthetic face of a known person is a challenge of a completely different calibre.
Most worrying, however, is that many companies still do not include such scenarios in their risk models. Trust in a familiar face and voice still sometimes goes unquestioned.
The role of IT departments: from administrator to trust guardian
In the new reality, IT departments and security teams must undergo a transformation – from system operators to active participants in a culture of trust. This means not only implementing new tools to detect synthetic content, but also changing organisational habits.
The first step is education. Employees need to know that what they see and hear can be false – even if it looks and sounds convincing. The next step is to review decision-making and identity confirmation processes: important financial actions or access to data should never rely on a single form of confirmation – especially not a video call alone.
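To make this concrete, here is a minimal sketch of such a rule in Python: a high-value action is approved only when it has been confirmed through at least two channels, and never on the strength of a video call alone. The threshold, channel names and data structures are illustrative assumptions, not a reference to any particular product.

```python
# Minimal sketch of an out-of-band approval rule. All names and limits
# below are hypothetical and would be defined by company policy.
from dataclasses import dataclass, field

HIGH_RISK_THRESHOLD = 10_000  # illustrative limit in the company currency


@dataclass
class ApprovalRequest:
    amount: float
    requester: str
    # Channels through which the request was confirmed, e.g. {"video_call", "callback"}.
    confirmed_channels: set[str] = field(default_factory=set)


def is_sufficiently_confirmed(req: ApprovalRequest) -> bool:
    """A video call alone never suffices; high-risk actions need two
    independent channels, at least one of which deepfakes cannot spoof
    as easily as audio or video."""
    channels = req.confirmed_channels
    non_av = channels - {"video_call", "voice_call"}
    if req.amount >= HIGH_RISK_THRESHOLD:
        return len(channels) >= 2 and len(non_av) >= 1
    return len(non_av) >= 1 or len(channels) >= 2


# Example: a transfer "approved" only on a video call is rejected.
req = ApprovalRequest(amount=50_000, requester="cfo@example.com",
                      confirmed_channels={"video_call"})
assert not is_sufficiently_confirmed(req)
req.confirmed_channels.add("signed_ticket")  # e.g. confirmation in the ticketing system
assert is_sufficiently_confirmed(req)
```

The point of the design is that no audiovisual channel can ever satisfy the rule on its own, which is exactly the assumption deepfakes break.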
Finally – technology. More and more companies are testing synthetic media detection tools, which analyse digital artefacts in footage and flag potentially fabricated content. However, this market is still in its early days, and the tools cannot yet be relied on completely.
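For illustration only, the toy heuristic below shows one class of artefact such tools look for: generated imagery often has an anomalous distribution of energy across spatial frequencies. Production detectors use trained models rather than a single hand-written rule; the function, the threshold and the random stand-in frame here are assumptions made for the sketch.

```python
# Toy frequency-domain check for synthetic imagery. The threshold and the
# stand-in frame are illustrative; a real detector would be a trained model.
import numpy as np


def high_freq_energy_ratio(gray_frame: np.ndarray) -> float:
    """Share of spectral energy outside the central (low-frequency) band."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_frame))) ** 2
    h, w = spectrum.shape
    ch, cw = h // 4, w // 4
    low = spectrum[h // 2 - ch : h // 2 + ch, w // 2 - cw : w // 2 + cw].sum()
    total = spectrum.sum()
    return float((total - low) / total)


def looks_synthetic(gray_frame: np.ndarray, threshold: float = 0.35) -> bool:
    # The threshold would have to be calibrated on known-genuine footage.
    return high_freq_energy_ratio(gray_frame) > threshold


# Usage with a random stand-in frame (in practice: frames decoded from the call).
frame = np.random.rand(256, 256)
print(high_freq_energy_ratio(frame), looks_synthetic(frame))
```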
Digital trust must be reinvented
Deepfakes challenge the basic assumptions of trust in business communications. Until now, we trusted what we saw and heard. In the new world, this is no longer enough. It is necessary to move to a ‘trust by verification’ model – trust based not on appearances, but on contextual analysis, behavioural signals and multi-stage verification.
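A hedged sketch of what such a model can look like in code: independent contextual and behavioural signals are weighted and combined, and one hard verification step remains mandatory regardless of the score. All signal names, weights and the threshold below are hypothetical.

```python
# Sketch of a "trust by verification" decision. No single signal - certainly
# not the video image - grants trust on its own. Weights are illustrative.
SIGNAL_WEIGHTS = {
    "known_device": 0.2,           # contextual: request comes from an enrolled device
    "usual_hours": 0.1,            # contextual: within the requester's normal pattern
    "typing_cadence_match": 0.2,   # behavioural: matches the person's profile
    "callback_verified": 0.5,      # multi-stage: confirmed via an independent channel
}


def trust_decision(signals: set[str], threshold: float = 0.7) -> bool:
    score = sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals)
    # Verification via an independent channel is non-negotiable, whatever the score.
    return score >= threshold and "callback_verified" in signals


print(trust_decision({"known_device", "usual_hours"}))  # False: context alone is not trust
print(trust_decision({"known_device", "typing_cadence_match", "callback_verified"}))  # True
```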
This is not only a question of technology, but also of organisational culture. Companies that do not redefine their approach to identity verification will be easy targets. It is no longer about protecting IT infrastructure – it is about protecting business decisions, relationships and credibility.