Deepfakes, a portmanteau of “deep learning” and “fake”, are a type of synthetic media in which a person in an image, video or audio recording is replaced with someone else’s likeness. Deep learning is a subset of machine learning that involves training artificial neural networks – computing algorithms designed to mimic the way neurons in the human brain interact – so that computers can learn from experience. In other words, it consists of algorithms that train machines to perform tasks such as speech recognition, image recognition and natural language processing. 

Deepfakes are typically created using two types of neural network: autoencoders, which compress an image and then learn to reconstruct it, and generative adversarial networks (GANs). GANs pit two networks against each other and train them to compete in a zero-sum game, where one side’s gain is the other’s loss. This competition allows GANs to create realistic images of people who do not exist or produce fake videos of real people. 
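For illustration, the adversarial training idea can be sketched in a few lines of code. The following is a minimal example using PyTorch, with toy dimensions and random data standing in for real images; it is not taken from any actual deepfake tool, but it shows the generator-versus-discriminator loop described above.

```python
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28  # toy sizes, for illustration only

# Generator: maps random noise to a fake "image" vector
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: scores whether an input looks real (1) or fake (0)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for _ in range(100):  # toy training loop
    real = torch.rand(32, IMG_DIM)               # stand-in for real images
    fake = generator(torch.randn(32, LATENT_DIM))

    # Discriminator step: learn to separate real from fake (its "gain")
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to fool the discriminator (the zero-sum "loss")
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```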

Deep disinformation 

Although deep learning technology has potential benefits in sectors such as healthcare – for example, improving precision medicine and disease detection – the growing sophistication of deepfakes has made them attractive for threat actors to weaponise in disinformation and influence operations. In September, former US ambassador to Russia Michael McFaul tweeted that someone with a Washington DC phone number was impersonating him with a deepfake. McFaul warned “you will see an AI [artificial intelligence]-generated ‘deep fake’ that looks and talks like me” and suggested it was set up to “undermine Ukraine’s diplomatic and war efforts”. 

This year, an AI-generated image purporting to show an explosion near the US Pentagon went viral on social media and was reported by Russian news outlet RT, forcing the US Department of Defense and a local fire department to confirm it was fake. In March 2022, Ukrainian television and news outlet Ukraine 24 was targeted in a cyber attack. Attackers reportedly disrupted Ukraine 24’s broadcast and compromised its website to display a deepfake video of Ukrainian President Volodymyr Zelenskiy ordering the country’s troops to surrender amid the conflict with Russia. The video circulated on social media before being removed from Facebook, Twitter and YouTube, and President Zelenskiy was forced to debunk it on social media. 

In June that year, the mayors of Berlin, Madrid and Vienna were tricked into speaking to a deepfake of their Kyiv counterpart Vitali Klitschko. In 2021, several European MPs from Latvia, Lithuania, Estonia, the UK and the Netherlands arranged video calls with a prankster claiming to be a Russian opposition figure. The software needed to generate such deepfakes remains publicly available on open-source platforms used by programmers to share and publish code, making it easier for moderately capable actors to target politicians, celebrities and business executives with disinformation and social engineering scams.  

From fraud to fake kidnapping 

Beyond state actors and influence operations, cybercriminals on the deep and dark web have continued to demonstrate a significant interest in deepfakes to carry out business email compromise (BEC) and promotional scams, as well as to bypass account verification and take control of voice assistants such as Siri and Alexa. The motivations range from financial fraud and extortion (for example, by blackmailing victims using fake evidence) to money laundering. 

Discussions in underground criminal communities have also focused on audio deepfakes that can be used to conduct BEC and payment diversion fraud. In 2021, deepfake audio was used to defraud a UAE-based company of USD 35m by convincing an employee that the funds were required for a corporate acquisition. In 2019, cybercriminals defrauded a UK-based energy company of USD 243,000 with the help of deepfake audio mimicking its CEO’s voice. 

Another emerging threat from deepfake technology, particularly audio, is that it can also facilitate virtual kidnappings. Virtual kidnapping involves deceiving victims into believing a loved one has been kidnapped, or that they themselves are under threat of kidnap, a technique that is highly prevalent in Latin America and in certain countries in the Asia-Pacific region. Perpetrators of such scams regularly demonstrate relatively sophisticated surveillance and intelligence-gathering methods to build a profile of the victim and increase the credibility of their operation.  

The potential use of deepfake audio to mimic the voice of a “kidnapped” relative will likely better enable perpetrators to deceive their targets and obtain swift payments, particularly as the fast pace and high-stress nature of virtual kidnappings limit victims’ ability to critically assess or question the deepfake in the moment. In April, an individual in the US was targeted with a virtual kidnapping attempt involving her daughter. The criminals reportedly used an audio deepfake of her daughter’s voice pleading for help and demanded a million-dollar ransom from the victim. 

Looking ahead  

The malicious use of deepfake technology has moved beyond its origins in pornographic content to become a powerful tool in sophisticated politically and financially motivated campaigns. Criminals on the dark web are already offering tailored deepfake audio and visual content to bypass multi-factor authentication (MFA), as well as services covering video and image editing and manipulation, personalised training lessons and the sharing of specialist software. Easy-to-use deepfake tools are being spread by users on dedicated forums and messaging platforms such as Telegram and Discord, while state actors will increasingly leverage “deepfaked geography” – integrating fake landscape features into maps and satellite imagery to disrupt adversaries’ intelligence collection. 

Organisations are increasingly integrating facial or voice recognition as an additional means of verification, so convincing AI-generated synthetic content will remain an evolving form of identity theft aimed at bypassing security measures. As the technology required to produce deepfakes proliferates, and as threat actors continue to share the skills and techniques needed to generate synthetic content, we anticipate that “deepfake-as-a-service” offerings will become more widespread. 

Speech synthesis models such as Microsoft’s VALL-E can clone a person’s voice and effectively replicate their tone, pitch, intonation and emotion. Malicious actors will increasingly integrate such audio deepfakes into their phishing campaigns to heighten the victim’s sense of urgency and make social engineering lures more credible, making it harder for employees to question the legitimacy of a request and easier to trick finance teams into making large wire transfers. 

Mitigation  

Organisations and individuals should ensure that MFA is enabled for all critical user accounts, and the most sensitive accounts should use biometric verification that is less exposed to potential threat actors, such as fingerprints. Organisations can also implement MFA techniques that combine facial or voice recognition with other authentication factors such as unique identification codes. Further, employees should be trained to understand how facial movements and audio patterns can be manipulated to bypass, for example, know-your-customer (KYC) processes. 
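As an illustration of combining factors, the sketch below pairs a biometric check with a time-based one-time password (TOTP). The face-matching function is a hypothetical placeholder for a vendor API, while the one-time code check uses the open-source pyotp library; the threshold and names are assumptions for the example only, not a recommended configuration.

```python
import pyotp

FACE_MATCH_THRESHOLD = 0.90   # illustrative threshold, not a vendor default

def face_match_score(live_capture: bytes, enrolled_template: bytes) -> float:
    """Hypothetical placeholder for a facial-recognition vendor's matching API."""
    return 0.95  # stub value for the example

def verify_login(live_capture: bytes, enrolled_template: bytes,
                 totp_secret: str, submitted_code: str) -> bool:
    # Both factors must pass: a spoofed face or voice alone is not enough
    biometric_ok = face_match_score(live_capture, enrolled_template) >= FACE_MATCH_THRESHOLD
    code_ok = pyotp.TOTP(totp_secret).verify(submitted_code)
    return biometric_ok and code_ok
```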

On a more technical level, the threats posed by such technology can be mitigated by applying digital watermarks to media content, deploying security solutions that use algorithms to detect inconsistencies such as irregular light and shadow patterns or unnatural facial movements, and using digital signatures that rely on cryptographic techniques to verify the integrity of content.  
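As a simple illustration of the digital signature approach, the sketch below signs raw media bytes with an Ed25519 key and verifies them later using Python’s cryptography package. The key handling and placeholder content are assumptions for the example, not a production design.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: generate a key pair and sign the raw media bytes
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media_bytes = b"...raw video or image bytes..."   # placeholder content
signature = private_key.sign(media_bytes)

# Consumer side: verify the content has not been altered since signing
received_bytes = media_bytes                      # e.g. a downloaded copy
try:
    public_key.verify(signature, received_bytes)
    print("Content integrity verified")
except InvalidSignature:
    print("Content has been modified or is not from the claimed source")
```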

More broadly, it is important to establish early warning systems for disinformation campaigns by expanding co-operation and intelligence-sharing between public and private organisations, particularly government institutions and social media companies. Organisations can also implement cyber security best practices, threat modelling and red-teaming processes to protect against AI-generated content being used for disinformation. 
