Deepfakes, the hyper-realistic digital manipulations of audio and video, are being adopted by cybercriminals, hacktivists, and fake news outlets at an alarming rate, according to a new report released by Sensity, a deepfake-monitoring company.
“Cybercriminals, hacktivists, adversarial countries, fraudsters, fake news outlets, and cyber soldiers have quickly incorporated AI technologies into their attack and deception frameworks, faster than anyone in the public and private sectors expected,” states the report’s introduction.
Sensity, founded in 2018 and specializing in deepfake detection, compiled the report from anonymized client data to highlight the growing risks deepfakes posed in 2023 and the first half of 2024.
Escalating Sophistication
One of the report’s key findings is the increasing sophistication of deepfake technology and the widespread availability of the tools used to create it. That easy access has fueled rapid adoption across a range of malicious activities.
Politicians: Prime Targets
The report reveals that politicians are the most frequently targeted individuals, accounting for nearly 40% of deepfake instances. These deepfakes often feature politicians making false statements to sway elections or public opinion. For example, the report cites a deepfake video of a Ukrainian politician falsely claiming responsibility for a terrorist attack in Moscow.
“Although the election campaign is still in an early phase we have found initial evidence of deepfake weaponisation during the primary election in particular against the main Donald Trump opponents,” the report notes in the context of the upcoming US elections.
Celebrities and Businesses Under Siege
Celebrities and businesses also face significant threats from deepfakes. Celebrities such as Tom Hanks, Elon Musk, and YouTuber MrBeast have been impersonated to promote fraudulent schemes. Businesses have not been spared either, with deepfakes used to facilitate high-stakes fraud: in one notable case, a deepfake scam led to a €23 million transfer to fraudsters.
These scams often proliferate on social media, leveraging the platforms’ viral reach and targeted advertising capabilities. The report indicates that the trading industry is the most heavily targeted, accounting for around 35% of deepfake scams, followed by retail and gambling at around 15% each, with public subsidy scams making up 12.5%.
Bypassing Biometric Security
High-tech scams involving deepfakes are not limited to social media. The report highlights sophisticated attacks where deepfakes bypass biometric verification systems to gain unauthorized access to online banking and financial services.
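One widely used countermeasure against this kind of attack (not described in the report, and sketched here purely as an illustration) is active liveness checking: the service issues a randomized challenge that a pre-recorded or injected deepfake stream is unlikely to satisfy within a short time window. In the hedged Python sketch below, looks_like_requested_action is a hypothetical stand-in for a real face-analysis model, and the challenge list is an assumption.

```python
"""Illustrative sketch of an active-liveness (challenge-response) check,
one common defence against injected or replayed deepfake video during
biometric login. Not a real product's implementation."""
import secrets
import time

# Assumed challenge set; real systems use their own vetted actions.
CHALLENGES = ["turn head left", "turn head right", "blink twice", "smile"]

def issue_challenge() -> dict:
    """Pick an unpredictable challenge and record when it was issued."""
    return {"action": secrets.choice(CHALLENGES), "issued_at": time.time()}

def looks_like_requested_action(frames: list, action: str) -> bool:
    """Hypothetical placeholder for a vision model that checks whether the
    captured frames actually perform the requested action."""
    return bool(frames)  # stub only; a real system runs a face-analysis model here

def verify_liveness(challenge: dict, frames: list, max_delay_s: float = 10.0) -> bool:
    """Accept only a prompt response that matches the issued challenge."""
    if time.time() - challenge["issued_at"] > max_delay_s:
        return False  # slow responses are consistent with pre-rendered deepfakes
    return looks_like_requested_action(frames, challenge["action"])

if __name__ == "__main__":
    challenge = issue_challenge()
    print("challenge:", challenge["action"])
    captured_frames = ["frame-0", "frame-1"]  # placeholder for camera frames
    print("liveness accepted:", verify_liveness(challenge, captured_frames))
```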
The Race to Detect Deepfakes
As the threat landscape evolves, technology companies are developing advanced methods to detect AI-generated content. Sensity’s detection tool analyzes pixels and file structures to identify modifications. Similarly, Intel introduced a real-time deepfake detector in 2022 that examines how light interacts with facial blood vessels to spot fake videos. Meta is also planning to label AI-generated content across its platforms to inform users.
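As a concrete illustration of what pixel-level analysis can involve (this is not Sensity’s or Intel’s actual method), the short Python sketch below computes one simple forensic statistic: generated imagery often carries unusual high-frequency artifacts, so an outsized share of spectral energy far from the image centre can serve as a weak red flag. The file name and threshold are illustrative assumptions; production detectors combine many such signals with trained models.

```python
"""Minimal sketch of a pixel-level forensic heuristic: measure how much of an
image's spectral energy sits in the high-frequency band, where generative
models often leave artifacts. Illustrative only, not a calibrated detector."""
import numpy as np
from PIL import Image

def high_frequency_energy_ratio(path: str) -> float:
    """Return the share of spectral energy in the outer (high-frequency) band."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))  # DC term at the centre

    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    y, x = np.ogrid[:h, :w]
    radius = np.sqrt((y - cy) ** 2 + (x - cx) ** 2)

    outer_band = radius > 0.35 * min(h, w)  # heuristic cutoff (assumption)
    return spectrum[outer_band].sum() / spectrum.sum()

if __name__ == "__main__":
    score = high_frequency_energy_ratio("suspect_frame.jpg")  # hypothetical file
    print(f"high-frequency energy ratio: {score:.3f}")
    # Threshold chosen for illustration; real tools learn decision boundaries.
    print("flag for review" if score > 0.15 else "no obvious spectral anomaly")
```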
The rapid advancement and adoption of deepfake technology underscore the urgent need for robust detection and mitigation strategies. As cybercriminals and malicious actors continue to exploit these tools, the battle against deepfakes remains a critical challenge for both public and private sectors.