Deep learning involves training artificial neural networks – computing algorithms loosely modelled on the neurons in the human brain – so that computers can learn from experience. Deepfakes, combining the terms “deep learning” and “fake”, are a type of synthetic media in which a person’s likeness in an image, video or audio recording is replaced with someone else’s.

The tools needed to generate convincing deepfakes remain publicly available on the open-source platforms programmers use to share and publish code. This makes it possible for moderately capable threat actors to target politicians, celebrities and business executives with disinformation and social engineering scams.

How deepfakes are used in disinformation

The sophistication of deepfakes has made them attractive for threat actors to weaponise in order to spread disinformation. Such uses include influence operations relating to, for example, election campaigns, the war in Ukraine and the Israel-Hamas conflict.

In March 2022, a cyber attack targeted Ukrainian television and news outlet Ukraine 24. Attackers reportedly disrupted Ukraine 24’s broadcast and compromised its website to display a deepfake video of Ukrainian President Volodymyr Zelenskiy calling on the country’s troops to surrender amid the conflict with Russia. The video circulated on social media before being removed from Facebook, X and YouTube, and President Zelenskiy was forced to debunk it on social media.

In 2023, an AI-generated image showing an explosion near the US Pentagon went viral on social media and was reported by Russian news outlet RT, prompting the US Department of Defense and a local fire department to refute its veracity. In June 2022, the mayors of Berlin, Madrid and Vienna were tricked into speaking to a deepfake of their Kyiv counterpart Vitali Klitschko, demonstrating both the intent of threat actors to use these technologies against politicians and that the technology is now mature enough to dupe senior officials.

Deepfakes for fraud

Cybercriminals on the deep and dark web have shown significant interest and proficiency in using deepfakes to carry out financially motivated campaigns. These include business email compromise (BEC), promotional scams, impersonating banking executives or high-profile clients to make unauthorised transactions, and bypassing account verification measures. Their motivations range from financial fraud and extortion (for example, blackmailing victims with fake evidence) to money laundering.

These methods have been in use for years. According to media reports, in 2024 a finance employee at a Hong Kong-based multinational corporation transferred USD 25m to fraudsters after joining a video call in which deepfakes impersonated the company’s senior executives. As the technology improves, the means to create deepfakes are becoming more accessible to threat actors, and deepfakes will soon be a standard resource in attackers’ toolkits.

Discussions in underground criminal communities online have also focused on audio deepfakes used to conduct BEC and payment diversion fraud. In 2021, deepfake audio was reportedly used to defraud a UAE-based company of USD 35m after threat actors convinced an employee that the funds were required for the acquisition of another company; some of the stolen funds were traced to accounts held with Centennial Bank in the US.

Virtual kidnapping

Another emerging threat from deepfake technology, particularly audio, is the facilitation of virtual kidnappings. Virtual kidnapping involves deceiving victims into believing a loved one has been kidnapped, or that they themselves may be kidnapped. The technique is highly prevalent in Latin America and some countries in the Asia-Pacific region.  

Perpetrators of such scams can have sophisticated surveillance and intelligence-gathering methods, which they use to build a profile of the victim and increase the credibility of their operation.

The use of deepfake audio to mimic the voice of a “kidnapped” relative better enables perpetrators to deceive their targets and obtain swift payments. The fast pace and high-stress nature of virtual kidnappings limit the victim’s ability to critically assess or question the deepfake in the moment. In 2023, an individual in the US was targeted with a virtual kidnapping attempt involving her daughter. The criminals reportedly used an audio deepfake of her daughter’s voice pleading for help and attempted to extort a million-dollar ransom.

Looking ahead

Criminals on the dark web now offer tailored deepfake audio and visual content. Services include bypassing multi-factor authentication (MFA), video and image manipulation, personalised training lessons and specialist software sharing. Easy-to-use deepfake tools are being distributed on dedicated forums and messaging platforms, while state actors will increasingly use “deepfaked geography” – integrating fake landscape features into maps and satellite imagery to disrupt adversaries’ intelligence collection.

Organisations are increasingly integrating facial or voice recognition into additional verification measures. Convincing AI-generated synthetic content will therefore remain an evolving method of identity theft aimed at bypassing such security measures. As the technology develops and threat actors continue to share their skills, “deepfake-as-a-service” will become more widespread. Beyond security threats, malicious actors will also look to use deepfakes to cause reputational harm and trigger potential regulatory fines against organisations.

Speech synthesis models such as Microsoft’s VALL-E can clone a person’s voice and effectively replicate their tone, pitch, intonation and emotion. Malicious actors will increasingly integrate such audio deepfakes into their phishing campaigns to heighten the victim’s sense of urgency and make social engineering lures more credible. This will make it harder for employees to question the legitimacy of a request and easier for attackers to trick finance teams into making large wire transfers.

Mitigating the threats from deepfakes

Specialised AI tools and forensic techniques can detect inconsistencies – such as irregular light and shadow patterns or unnatural facial movements – that help verify whether content has been manipulated by AI. More broadly, training employees to detect deepfakes and conducting deepfake phishing simulation exercises are just as important as technical solutions.
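
As an illustration of the kind of check such forensic tools automate, the sketch below applies error level analysis (ELA), a simple and widely used image-forensics technique (not a method named in this article): a JPEG is re-saved at a known quality and the per-pixel difference is amplified, so regions with a different compression history – often a sign of splicing or editing – stand out. The filename is a placeholder.

```python
# Minimal error level analysis (ELA) sketch using the Pillow library.
from PIL import Image, ImageChops


def error_level_analysis(path: str, resave_quality: int = 90, scale: int = 15) -> Image.Image:
    """Re-save an image as JPEG and amplify per-pixel differences.

    Regions edited after the original compression tend to show a different
    error level from the rest of the image.
    """
    original = Image.open(path).convert("RGB")
    resaved_path = path + ".ela.jpg"
    original.save(resaved_path, "JPEG", quality=resave_quality)
    resaved = Image.open(resaved_path)

    # Absolute per-pixel difference, amplified so faint artefacts become visible.
    diff = ImageChops.difference(original, resaved)
    return diff.point(lambda value: min(255, value * scale))


if __name__ == "__main__":
    # "suspect.jpg" is a placeholder filename for the image under review.
    error_level_analysis("suspect.jpg").save("suspect_ela.png")
```

ELA alone is not conclusive; it is one signal analysts combine with checks on lighting, facial movement and metadata.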

Organisations and individuals should ensure that MFA is implemented and enabled on all user accounts. The most sensitive accounts should use an effective authentication system with at least two of the following:

  • Something you know (a unique passphrase of reasonable complexity)
  • Something you have (an authentication app tied to a device or a hard token; see the sketch after this list)
  • Something you are (biometric data)
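
To illustrate the “something you have” factor, here is a minimal sketch of server-side verification of a time-based one-time password (TOTP) using the open-source pyotp library; the secret, issuer name and username are placeholders, and a real deployment would generate and securely store one secret per user.

```python
# Minimal TOTP ("something you have") sketch using the pyotp library.
import pyotp

# Placeholder: in practice, generate one secret per user with pyotp.random_base32()
# and store it securely alongside the user's account record.
USER_TOTP_SECRET = pyotp.random_base32()


def provisioning_uri(username: str) -> str:
    """Return an otpauth:// URI the user can scan into an authenticator app."""
    return pyotp.TOTP(USER_TOTP_SECRET).provisioning_uri(
        name=username, issuer_name="ExampleCorp"  # issuer name is illustrative
    )


def verify_code(submitted_code: str) -> bool:
    """Check the six-digit code from the user's authenticator app.

    valid_window=1 tolerates one 30-second step of clock drift.
    """
    return pyotp.TOTP(USER_TOTP_SECRET).verify(submitted_code, valid_window=1)


if __name__ == "__main__":
    print(provisioning_uri("alice@example.com"))
    print(verify_code(input("Enter the code from your authenticator app: ")))
```

Codes like these should complement, not replace, the other factors: a deepfaked voice or face cannot answer a TOTP challenge tied to the legitimate user’s device.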

Employees should be trained to understand how facial movements and audio patterns can be manipulated to bypass, for example, know-your-customer (KYC) verification processes. Organisations can also implement cyber security best practices, threat modelling and red-teaming processes to protect against AI-generated content being used for disinformation.  

 

Understanding this evolving threat is crucial for protecting organisations today. Download our whitepaper to learn more about digital threats in the age of artificial intelligence.

 

