The growing interest in AI-based tools such as ChatGPT and InVideo AI has not escaped the attention of cybercriminals. Hackers are increasingly using the AI boom as bait to infect computers with ransomware and other malware, according to a recent Cisco Talos report.
Instead of classic phishing campaigns, scammers are creating fake websites and installers impersonating well-known AI tools. In one case, an installer posing as ‘ChatGPT 4.0’ concealed the Lucky_Gh0$t ransomware, which encrypts files, deletes larger files outright and makes system recovery difficult. Other cases involved malicious versions of the InVideo AI tool (carrying the Numero malware) and Nova AI (carrying the CyberLock ransomware), where infection leads to loss of data access, system damage or ransom demands of up to $50,000 in Monero cryptocurrency.
The common denominator of these attacks is an attempt to bypass security by using legitimate AI components and manipulating user trust. Cybercriminals are targeting both individuals and companies looking for modern solutions for automation, content generation or lead conversion.
The boom in AI is not only an opportunity for innovators, but also a new area for abuse. In an era of ‘AI for all’, users must learn to recognise false promises and critically verify the source of any app they download. The golden rule still applies: if something looks too good to be true, it probably is.
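One practical way to verify a downloaded installer, assuming the vendor publishes a SHA-256 checksum on its official site, is to compute the file's digest locally and compare it against the published value. The sketch below is a minimal illustration; the file path and hash in the usage comment are hypothetical, not taken from any real AI tool.

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks
    so that large installers do not need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_published_hash(path: str, published_hash: str) -> bool:
    """Compare the local digest against the checksum published
    by the vendor (case- and whitespace-insensitive)."""
    return sha256_of_file(path) == published_hash.strip().lower()

# Hypothetical usage:
# matches_published_hash("ChatGPT-setup.exe", "3a7bd3e2...")
```

Note the limits of this check: a matching hash only confirms the file is the one the publisher described. If the site publishing the checksum is itself a fake, as in the campaigns above, the hash of the malicious installer will match too, so the check must be combined with verifying that the site is the tool's genuine domain.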