Deepfake Statistics and Trends: Cyber Threats in 2024
Explore the alarming rise in deepfake phishing statistics. Learn about the latest trends and essential prevention tips to secure your identity and protect your business against deepfake phishing attacks.
2024-04-16
In 2024, deepfake technology has grown more advanced, creating a new wave of cyber threats. These threats go beyond tricking individuals; they now involve spreading false information, stealing identities, and more. Deepfake technology has emerged as a significant cybersecurity threat, leading to substantial financial losses, operational disruptions, and reputational damage across various sectors. Below are data-backed examples illustrating these impacts:
- In 2023, deepfake fraud attempts accounted for 6.5% of total fraud attempts, marking a 2,137% increase over the past three years.
- Addressing deepfake incidents can interrupt business activities, incur additional expenses, and lead to further financial losses.
- In 2023, a deepfake video falsely depicted Singapore's Prime Minister Lee Hsien Loong endorsing a cryptocurrency platform, leading to public confusion and prompting the Prime Minister's Office to issue a statement debunking the video.
These examples underscore the critical need for organizations to implement robust cybersecurity measures to detect and mitigate deepfake-related threats.
Let's explore the current state of deepfake statistics in 2024 and understand the risks that deepfake phishing poses.
Before you explore further, check out our article "What is Deepfake Phishing?" to understand how deepfakes are used in cyber attacks and discover how to spot and avoid these sophisticated deepfake scams.
What Is a Deepfake?
A deepfake is a video or audio clip that has been manipulated to look or sound real even though it is not. It uses computer programs to change faces or voices in a video. For example, it can put one person's face onto another person's body, or make someone appear to say things they never actually said.
This is done by using large amounts of data and machine learning, a type of artificial intelligence, to learn and copy the way a person looks or sounds. While it can be used for fun, like in apps that change your face, it can also be used in harmful ways, such as spreading false information. The simplified sketch below illustrates the core idea.
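To make the concept concrete, here is a minimal, heavily simplified sketch of the shared-encoder, per-person-decoder idea behind classic face-swap deepfakes. It assumes PyTorch is available; the class names, image sizes, and training details are illustrative assumptions rather than any specific tool's implementation, and this is nowhere near a working deepfake generator.

```python
# Conceptual sketch only: the shared-encoder / per-person-decoder idea behind
# classic face-swap deepfakes. Sizes, names, and training details are
# illustrative assumptions, not any real tool's implementation.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a small latent vector."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent vector; one decoder per person."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )
    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

# One shared encoder learns what faces look like in general;
# decoder_a and decoder_b each learn to render one specific person.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# Training (sketched): reconstruct person A's faces through decoder_a and
# person B's faces through decoder_b, so both share one latent space.
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for real face crops of person A
reconstruction_a = decoder_a(encoder(faces_a))
loss_a = nn.functional.mse_loss(reconstruction_a, faces_a)

# The "swap": encode person A's expression, then decode it with person B's
# decoder, producing person B's face wearing person A's expression.
fake_b = decoder_b(encoder(faces_a))
print(fake_b.shape)  # torch.Size([8, 3, 64, 64])
```

In real deepfake tooling the same swap trick is scaled up with far larger networks, adversarial training, and careful face alignment, which is what makes the results so convincing.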
2024 Deepfake Statistics and Current Trends About Cyber Threats
The deepfake phishing statistics are worrying because these attacks are both effective and dangerous. Here are the deepfake fraud statistics and trends you need to know to protect your business:
Surge in Deepfake-Based Identity Fraud
According to deepfake statistics reported by Sumsub, there was a significant increase in identity fraud caused by deepfakes in the U.S. and Canada from 2022 to the first quarter of 2023. The incidence of such fraud in the U.S. jumped from 0.2% to 2.6%, and in Canada, from 0.1% to 4.6%.
Prevalence of Non-Consensual Deepfake Content
A study by Deeptrace (now Sensity AI) found that 96% of deepfake videos online were non-consensual pornography. The number of deepfake videos identified surged to over 85,000 by December 2020.
Public Unawareness and Misconception
Deepfake statistics reveal a concerning trend: a significant portion of the global population is unaware of deepfakes. In an iProov survey, 71% of respondents worldwide said they don't know what a deepfake is, and their confidence in being able to spot one varied widely.
Difficulty in Identifying Deepfake Audio
According to deepfake statistics revealed by LocalCircles (2023), 25% of individuals have difficulty distinguishing deepfake audio from real audio, which makes them susceptible to deepfake scams.
Increased Cybersecurity Vulnerabilities
Many organizations now view generative AI as a cybersecurity threat. According to deepfake phishing statistics from Sapio Research, 46% of survey respondents indicated that it increases their vulnerabilities.
Rise in Cyber Attacks Attributed to Generative AI
According to deepfake fraud statistics revealed by Sapio Research (2023), 85% of security professionals believe that the use of generative AI has contributed to an increase in cyber attacks.
Banning of Generative AI Technologies
According to deepfake fraud statistics revealed by ExtraHop (2023), 32% of organizations reported banning generative AI technologies due to security concerns.
Erosion of Trust Within Organizations
The regular occurrence of deepfake scams could lead to a breakdown of trust among employees, a challenge expected to grow as most online content is projected to be synthetically generated by 2026.
Global Increase in Deepfake Fraud
The use of deepfakes in cyber attacks has increased dramatically worldwide, with notable surges in North America, Asia-Pacific, the Middle East and Africa, and Latin America. The cryptocurrency sector emerged as the most targeted by deepfake-related fraud.
Country-Specific Data on Identity Fraud
Bangladesh, Pakistan, and Latvia were among the countries with the highest rates of detected identity fraud, according to Sumsub's data.
Organizational Experiences with Deepfake Voice Fraud
Many organizations have encountered deepfake voice fraud, highlighting the simplicity of cloning voices with current technology.
Most Deepfake Phishing Attacks Are Delivered via Email
According to the Global Incident Response Threat Report on deepfake phishing statistics (2022), 78% of deepfake phishing attacks are delivered via email.
Cryptocurrency Sector and Deepfake Phishing
According to Sumsub’s deepfake phishing statistics (2023), 88% of AI phishing or deepfake scam techniques target the cryptocurrency sector.
Most Targeted Industry by Deepfake Phishing
Online media was the most targeted sector, with a deepfake-related fraud rate of 4.27% in 2023, followed by professional services (3.14%) and healthcare (2.41%), according to Sumsub’s deepfake crime statistics (2023).
Deepfake Phishing on Social Media
Deepfake crime statistics are alarming. According to Reuters, Deep Media predicted that around 500,000 voice and video deepfakes would be shared on social media platforms in 2023.
Fastest Growing Attack in 2023
Deepfake crime statistics from Onfido (2024) highlight a startling trend. As generative AI tools advance, deepfake technology is becoming more sophisticated and readily available. Shockingly, deepfake-related phishing and fraud incidents surged by 3,000% in 2023.
Protect Your Business from Deepfake Phishing With Keepnet
Keepnet's social engineering simulations offer a proactive approach, focusing on educating employees about the various forms of deepfake phishing attacks. These simulations are designed to replicate real-life deepfake phishing scenarios, providing a hands-on learning experience. This helps individuals and organizations recognize and respond effectively to phishing attempts.
Below are detailed explanations of Keepnet's phishing simulations:
AI-based Voice Phishing Simulation
- Purpose: The voice phishing simulation imitates vishing attempts, where hackers use deepfake audio or convincing impersonations over the phone to trick individuals into revealing personal or financial information.
- How It Works: Participants receive simulated vishing calls that utilize AI-generated voices or recordings that closely mimic real individuals, such as company executives or known contacts. These calls may request sensitive data or prompt actions that would compromise security.
- Learning Outcomes: Users learn to identify signs of vishing, such as unusual requests for information over the phone, discrepancies in the caller's story, or subtle anomalies in voice quality. The vishing simulation teaches critical listening skills and reinforces the importance of independently verifying the caller's identity.
SMS Phishing Simulation
- Purpose: Smishing simulation focuses on SMS phishing, where attackers send text messages that appear to be from legitimate sources, tricking recipients into clicking on malicious links or disclosing sensitive information.
- How It Works: Participants receive simulated smishing messages that mimic various tactics used by attackers, such as fake bank alerts, urgent warnings, or attractive offers. These messages are designed to provoke actions that typically lead to data breaches or financial loss.
- Learning Outcomes: Users learn to distinguish legitimate messages from phishing attempts by analyzing the content for suspicious elements such as urgency, misspellings, or unusual sender information (see the illustrative sketch after this list). The smishing simulation encourages a cautious approach to responding to unexpected text messages and highlights the importance of not clicking on links or attachments from unverified sources.
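As a rough illustration of the red flags trainees learn to look for, here is a toy Python heuristic that scores an SMS message against three common smishing signals. The keyword list, scoring scheme, and example values are assumptions made for this sketch; it is not Keepnet's detection logic, and no heuristic replaces awareness training.

```python
# Illustrative heuristic only: a toy SMS triage check. Keyword lists and the
# scoring scheme are assumptions for demonstration, not a real product's logic.
import re

URGENCY_WORDS = {"urgent", "immediately", "suspended", "verify now", "final notice"}
URL_PATTERN = re.compile(r"https?://(\S+)", re.IGNORECASE)

def smishing_risk_score(message: str, sender: str, known_senders: set) -> int:
    """Return a rough 0-3 risk score based on common smishing red flags."""
    score = 0
    text = message.lower()
    # Red flag 1: urgency or threat language designed to rush the recipient.
    if any(word in text for word in URGENCY_WORDS):
        score += 1
    # Red flag 2: a link in the message, a common way to deliver a fake login page.
    if URL_PATTERN.search(message):
        score += 1
    # Red flag 3: the sender is not in the recipient's list of trusted sources.
    if sender not in known_senders:
        score += 1
    return score

# Example: a fake "bank alert" from an unknown number trips all three flags.
msg = "URGENT: your account is suspended. Verify now at http://secure-bank.example"
print(smishing_risk_score(msg, "+1555000111", known_senders={"MyBank"}))  # 3
```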
Callback Phishing Simulation
- Purpose: This simulation addresses callback phishing, where victims receive a message urging them to call back a number for various reasons, often related to security concerns or account issues. Upon calling back, they're manipulated into giving out confidential information.
- How It Works: Users are presented with voicemails or text messages prompting them to return a call to a seemingly official number. Once they initiate the callback, they're greeted by automated systems or live actors trained to extract information under the guise of resolving a non-existent problem.
- Learning Outcomes: Participants learn the importance of being skeptical of unsolicited callback requests, especially those that convey a sense of urgency or fear. The simulation teaches users to independently verify the authenticity of the contact through official channels, rather than responding directly to unsolicited communications (a minimal verification sketch follows this list).
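To show what "verify through official channels" can look like in practice, here is a minimal Python sketch that checks a callback number against a trusted directory before the call is returned. The directory entries and phone numbers are made up for illustration and are not real contact details.

```python
# Minimal sketch of the "verify through official channels" habit: before
# returning a call, check the number against a directory you trust (e.g. the
# number printed on your card or on the vendor's official website).
# The entries below are made-up examples, not real phone numbers.
OFFICIAL_NUMBERS = {
    "ExampleBank support": "+1-800-555-0100",
    "IT help desk": "+1-800-555-0101",
}

def is_official_callback_number(number: str) -> bool:
    """Return True only if the number appears in the trusted directory."""
    return number in OFFICIAL_NUMBERS.values()

voicemail_number = "+1-800-555-0199"  # number left in a suspicious voicemail
if not is_official_callback_number(voicemail_number):
    print("Do not call back; contact the organization via its published number.")
```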
Through social engineering simulations and security awareness training, Keepnet aims to test and improve employees' readiness against deepfake phishing attacks. This approach helps employees confidently identify and prevent these growing phishing attacks.
Please watch our full product demo on YouTube and learn how we can help you fight deepfake phishing.
Editor's Note: This blog was updated on December 5, 2024.