Deceptively realistic videos are appearing more and more often in which celebrities say or do things you would never expect of them: Barack Obama attacks his successor, for example, or Mark Zuckerberg confesses to having handed all Facebook user data over to the public … But these people never actually said or did any of this. These are so-called deepfake videos, produced with the help of artificial intelligence.

Even though many such videos can still be exposed as fakes relatively quickly, the technology is constantly improving and carries considerable potential for harm. Detecting such false information is becoming increasingly difficult. In addition, cybercriminals are already using deepfakes to attack companies with new scams: The Wall Street Journal, for example, reported on an unnamed British company that fell victim to a deepfake attack. In this case, AI-based software was used to convincingly imitate the voice of the CEO of its German parent company, and the head of the British company was successfully tricked into transferring 243,000 US dollars to a foreign bank account.

In light of these developments, Hornetsecurity takes a closer look at the topic of deepfakes below.

What is a deepfake?

Deepfakes are manipulated video and audio files that imitate a person’s biometric characteristics, such as appearance, facial expressions, or voice, in a deceptively realistic way. The term is a blend of “deep learning”, the AI technique used to create them, and “fake”, the practice of imitating someone or something.

To create deepfake videos, artificial neural networks with a certain learning capability are fed with image or video material. From this source material, the AI software learns to render the person being imitated in a different context. The quality of the result depends on how much source material is available and on how many layers the neural network has – i.e. how “deep” it is. To create the imitation, two algorithms work in interplay, a setup known as a generative adversarial network (GAN): one generates the fake, while the second checks the result for flaws. The more often this learning process is repeated, the more authentic the fake becomes, as the sketch below illustrates.
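To make this interplay concrete, here is a minimal, purely illustrative sketch of such a generator/discriminator pair in PyTorch. The network sizes, the random stand-in “training data”, and all hyperparameters are assumptions chosen for demonstration, not the setup of any real deepfake tool:

```python
import torch
import torch.nn as nn

# Generator: turns random noise into a fake "image" (here a flat 784-value vector).
generator = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 784), nn.Tanh(),
)

# Discriminator: scores how "real" an input looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(784, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_images = torch.rand(32, 784)   # stand-in for real source material
real_label = torch.ones(32, 1)
fake_label = torch.zeros(32, 1)

for step in range(1000):            # each repetition refines the fake
    # 1) Train the discriminator to tell real material from generated fakes.
    fakes = generator(torch.randn(32, 64))
    d_loss = (loss_fn(discriminator(real_images), real_label)
              + loss_fn(discriminator(fakes.detach()), fake_label))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to produce fakes the discriminator accepts as real.
    g_loss = loss_fn(discriminator(fakes), real_label)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The two losses pull against each other: as the discriminator gets better at spotting fakes, the generator is forced to produce more convincing ones, which is exactly the repetition effect described above.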

Videos are not the only thing that can be faked with AI and deep learning; voices can also be cloned using a similar method.

Why is the number of deepfakes increasing?

Not long ago, replacing faces in videos required elaborate CGI effects, expert knowledge, and a large budget. Thanks to freely available AI software such as DeepFaceLab, it is now possible even for IT laypersons. Expensive hardware is no longer necessary either: users whose own graphics card is too weak can, for example, run up to twelve hours of AI training in the cloud via Google Colab. Once the program has been fed with material, it creates the manipulation largely automatically. In addition, the deep learning techniques are constantly evolving and require less and less source material. While several hours of video footage were originally necessary, some models now need only a few images to swap faces.
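As a side note on the Colab route: inside a notebook, a quick check like the following (a minimal sketch assuming the PyTorch package that Colab ships by default) shows whether a cloud GPU has actually been assigned for such training:

```python
import torch

# Inside a Colab notebook: check whether a cloud GPU is available for training.
if torch.cuda.is_available():
    print("GPU available:", torch.cuda.get_device_name(0))
else:
    print("No GPU assigned - enable one via 'Runtime > Change runtime type'.")
```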

The process of imitating a voice is similar: programs such as Lyrebird need only a few minutes of audio material to generate credible imitations.

While celebrities have been the main targets so far, the case described at the beginning shows that cybercriminals are now using the technology to attack companies as well.

Which deepfake attacks should companies expect?

The IT security experts at Hornetsecurity see high risk potential in two areas in particular. The first is so-called CEO fraud, in which cybercriminals pose as executives in personally addressed emails and try to persuade employees to transfer large sums of money. Deepfake technology now makes it possible to drastically increase the credibility of this scam by attaching fake video or audio files.

Given how rapidly the technology is developing, it is also conceivable that fraudsters could contact employees directly by phone or video call and impersonate the CEO in real time. The case of the British company mentioned at the beginning of this article shows that this approach has already been used successfully in practice: its CEO believed a caller to be the managing director of the German parent company, his superior, and on the caller’s instructions transferred 243,000 US dollars to a supposed supplier.

Another tactic could also become a problem: cybercriminals create deepfakes in which executives appear to talk about their own company, announcing insolvency, for example. The attackers then threaten to pass the material on to the media or publish it on social media channels unless their demands are met.

How can deepfake attacks be detected and prevented?

For deepfakes intended to enter a company via email, there is a chance that spam and malware filters will block the message and prevent the attached or linked audio or video files from being opened. However, these filters cannot recognize deepfakes as such; instead, they analyze the message itself, for example checking whether the domain, IP address, or sender is on a blocklist, or whether the message contains harmful links or attachments, as sketched below.
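Here is a highly simplified Python sketch of the kind of metadata checks meant here. The blocklist entries and the message structure are made-up examples; real filters such as Hornetsecurity’s are of course far more sophisticated:

```python
import re

# Made-up example blocklists and attachment types for illustration only.
DOMAIN_BLOCKLIST = {"malicious-example.com"}
IP_BLOCKLIST = {"203.0.113.17"}
SUSPICIOUS_ATTACHMENTS = (".exe", ".js", ".vbs")

def looks_suspicious(message: dict) -> bool:
    # Sender domain or originating IP on a blocklist?
    sender_domain = message["from"].rsplit("@", 1)[-1].lower()
    if sender_domain in DOMAIN_BLOCKLIST:
        return True
    if message.get("source_ip") in IP_BLOCKLIST:
        return True
    # Potentially harmful attachment types?
    if any(att.lower().endswith(SUSPICIOUS_ATTACHMENTS)
           for att in message.get("attachments", [])):
        return True
    # Links in the body pointing to blocklisted domains?
    for url_domain in re.findall(r"https?://([^/\s]+)", message.get("body", "")):
        if url_domain.lower() in DOMAIN_BLOCKLIST:
            return True
    return False

print(looks_suspicious({
    "from": "ceo@malicious-example.com",
    "source_ip": "198.51.100.4",
    "body": "Please transfer the funds today.",
    "attachments": ["voice-message.exe"],
}))  # -> True
```

Note that none of these checks ever inspect whether an attached video or voice recording is genuine; a deepfake sent from a clean address with a clean payload would pass them all.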

The more targeted and individualized the attack, the greater the probability that it will reach its victim.
It becomes particularly dangerous when the attack is carried out by telephone or video call, as no technical security mechanisms can intervene there.

The IT security experts at Hornetsecurity therefore emphasize how crucial it is to sensitize employees and managers to this new threat scenario. At present, only sufficient awareness of this form of attack can provide effective protection.