Deepfake Statistics & Trends 2025: Growth, Risks, and Future Insights
Deepfakes are growing at an alarming rate—our 2025 analysis reveals key statistics, emerging trends, and the real risks businesses and individuals face.
Phishing has always been about deception, but deepfakes take it to a new level. Instead of just receiving a suspicious email, today’s victims may hear the voice of their CEO asking for an urgent transfer or see a convincing video of a manager requesting login details. Thanks to generative AI, producing synthetic voices and videos is cheaper, faster, and more realistic than ever before.
The deepfake statistics prove that this is no longer science fiction. Deepfake phishing is spreading across industries, targeting both employees and consumers, and costing businesses millions. In this blog, we’ll dive into the numbers, explore real-world cases, and provide actionable steps you can take to defend against this growing threat.
Deepfake Phishing Statistics You Need to Know
Deepfake phishing scams, utilizing AI-generated voice and video, are becoming increasingly difficult to detect, making it important for individuals and organizations to understand the risks. This article delves into the latest deepfake phishing statistics, shedding light on how these AI-powered phishing attacks are on the rise.
Deepfake Phishing Statistics: Overall Growth and Projections

- Deepfake files surged from 500,000 in 2023 to a projected 8 million in 2025.
- Deepfake fraud attempts spiked by 3,000% in 2023, a 31-fold increase year-on-year.
- Detected deepfake incidents increased tenfold between 2022 and 2023 (Sumsub).
- The volume of deepfake content is projected to increase by 900% annually.
- From 2017 to 2022, there were 22 recorded deepfake incidents.
- In 2023, the number of deepfake incidents nearly doubled to 42.
- By 2024, deepfake incidents increased by 257% to 150.
- In the first quarter of 2025 alone, there were 179 deepfake incidents, surpassing the total for all of 2024 by 19%.
- 680% rise in deepfake activity year-over-year in 2024 (Pindrop 2025 Voice Intelligence + Security Report).
- 26% increase in fraud attempts in 2024 (Pindrop).
- 475% increase in synthetic voice fraud in insurance in 2024 (Pindrop).
- Deepfake fraud could rise 162% in 2025 (Pindrop).
- Contact center fraud could reach $44.5 billion in 2025 (Pindrop).
- Fraud attempts with deepfakes have increased by 2,137% over the last three years (Signicat report).
- Deepfake spear phishing attacks have surged over 1,000% in the last decade.
- Deepfake videos increased by 550% between 2019 and 2024.
- There was a 202% increase in phishing email messages in the second half of 2024.
- Credential phishing attacks increased by 703% in the second half of 2024.
- The global rate of identity fraud nearly doubled from 2021 to 2023.
- The cryptocurrency sector has been especially hit, with deepfake-related incidents in crypto rising 654% from 2023 to 2024.
- There has been a 4x increase in the number of deepfakes detected worldwide from 2023 to 2024, accounting for 7% of all fraud attempts.
- Crypto platforms saw the highest rate of fraudulent activity attempts, which have risen 50% year-over-year, from 6.4% in 2023 to 9.5% in 2024.
Deepfake Statistics on Financial Impacts and Losses

- Fraud losses in the U.S. facilitated by generative AI are projected to climb from $12.3 billion in 2023 to $40 billion by 2027, a compound annual growth rate of roughly 32% (Deloitte Center for Financial Services); see the quick CAGR check after this list.
- In 2024, businesses lost an average of nearly $500,000 per deepfake-related incident.
- Some large enterprises experienced losses up to $680,000 per deepfake incident in 2024.
- Losses in North America exceeded $200 million in the first quarter of 2025 due to deepfake fraud.
- In February 2024, a finance worker at Arup was tricked into wiring $25 million due to a deepfake video conference call.
- In 2019, a UK energy firm was defrauded of €220,000 via a deepfaked voice clone of its CEO.
- CEO fraud now targets at least 400 companies per day using deepfakes.
- 77% of voice-clone scam victims lost money; about a third lost over $1,000, and 7% lost as much as $15,000.
- More than 10% of companies have faced attempted or successful deepfake fraud, with damages from successful attacks reaching as high as 10% of annual profits (Business.com 2024).
- Older Americans reported $3.4 billion in fraud losses in 2023, an 11% rise from 2022.
- In the first half of 2023, Britain lost £580 million to fraud, with £43.5 million stolen through impersonations using deepfakes.
- £6.9 million was lost to impersonations of CEOs using deepfakes in the first half of 2023.
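As a sanity check on projections like Deloitte's, the compound annual growth rate implied by two endpoint values is simple to verify. Here is a minimal sketch in Python (the endpoint years are taken from the figures above; differences in base-year assumptions explain why the implied rate lands slightly above the cited 32%):

```python
# Quick check of the CAGR implied by Deloitte's projection:
# $12.3B (2023) growing to $40B (2027).
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

rate = cagr(12.3, 40.0, 2027 - 2023)
print(f"Implied CAGR: {rate:.1%}")  # -> ~34.3%, in line with the cited ~32%
```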
Deepfake Phishing Statistics on Detection and Accuracy Challenges

- Human detection rates for high-quality video deepfakes are 24.5%.
- The market for AI detection tools is growing at a compound annual rate of around 28-42%.
- The effectiveness of defensive AI detection tools drops by 45-50% when used against real-world deepfakes outside controlled lab conditions.
- Around 60% of people believe they could successfully spot a deepfake video or image.
- For deepfake images, human accuracy is 62% in controlled studies.
- A University of Florida study found participants identified audio deepfakes with 73% accuracy, yet they were still frequently fooled.
- A 2025 iProov study found that only 0.1% of participants correctly identified all fake and real media shown.
- 70% of people said they aren’t confident they can tell the difference between a real and cloned voice (McAfee 2023).
- The most popular format for deepfake incidents is video, with 260 reported cases.
- In 2024, roughly 26% of people encountered a deepfake scam online, and 9% fell victim to one.
- A deepfake attempt occurred every five minutes in 2024.
- Less than 1% of all fact-checked misinformation during the 2024 election cycles was AI content.
- The human element was a factor in 68% of breaches (2024 Verizon DBIR).
- 68% of deepfakes are now “nearly indistinguishable from genuine media.”
Deepfake Statistics on Creation and Accessibility

- Scammers need as little as three seconds of audio to create a voice clone with an 85% voice match to the original speaker.
- The deepfake robocall of President Joe Biden in 2024 cost $1 to create and took less than 20 minutes.
- Deepfake attacks bypassing biometric authentication increased by 704% in 2023.
- Gartner predicts that by 2026, 30% of enterprises will no longer consider standalone identity verification (IDV) and authentication solutions reliable in isolation.
- 40% of people reported they would help if they got a voicemail from their spouse who needed assistance (McAfee 2023).
- One in 10 people report having received a cloned voice message, and 77% of these people lost money from scams.
- 53% of people share their voices online or via recorded notes at least once a week (McAfee 2023).
- Searches for “free voice cloning software” rose 120% between July 2023 and 2024 (Google Trends).
- DeepFaceLab claims that more than 95% of deepfake videos are created with its open-source software.
- The price per minute to purchase a good-quality deepfake video can range from $300 to $20,000 (Kaspersky 2023).
- Email is the most common delivery method for deepfake phishing attacks.
- At least 500,000 video and audio deepfakes were shared on social media in 2023.
- The global Deepfake AI market was valued at USD 563.6 million in 2023 and is projected to reach USD 13,889.8 million by 2032, with a CAGR of 42.79% from 2024 to 2032.
- Synchronized audio-and-video impersonations now account for 33% of deepfake cases.
Deepfake Statistics on Sector Vulnerabilities

- 88% of all detected deepfake fraud cases in 2023 were in the cryptocurrency sector, with another 8% in fintech (Sumsub).
- The fintech industry saw a 700% increase in deepfake incidents in 2023.
- 42.5% of fraud attempts detected in the financial sector are now due to AI (Signicat).
- Deepfake attacks represent around 6.5% of all fraud attempts detected, or 1 in 15 cases (Signicat).
- 25.9% of executives revealed that their organizations had experienced one or more deepfake incidents targeting financial and accounting data in the 12 months prior to 2024.
- 53% of financial professionals had experienced attempted deepfake scams as of 2024.
- 85% of U.S. and U.K. finance professionals viewed deepfake scams as an “existential” threat to their organization’s financial security.
- Just over half of finance professionals in the U.S. and U.K. have been targeted by a financial scam powered by deepfake technology, and 43% admitted falling victim.
- Cryptocurrency saw almost double the number of fraud attempts compared to any other industry (9.5%) in 2024.
- Lending and mortgages accounted for 5.4% of fraud attempts in 2024.
- Traditional banks accounted for 5.3% of fraud attempts in 2024.
Deepfake Phishing Statistics on Victim Targeting and Demographics

- A 2024 McAfee study found that 1 in 4 adults have experienced an AI voice scam.
- 1 in 10 adults have been personally targeted by an AI voice scam (McAfee 2024).
- 96–98% of all deepfake content online consists of non-consensual intimate imagery (NCII).
- 99–100% of victims in deepfake pornography are female.
- Since 2017, fraud has accounted for 31% of all deepfake incidents.
- Since 2017, 17% of deepfake incidents involved political figures.
- Since 2017, 27% of deepfake incidents featured celebrities.
- Since 2017, explicit content comprised 25% of deepfake incidents.
- In the first quarter of 2025, celebrities were targeted 47 times, an 81% increase compared to the whole year of 2024.
- In the first quarter of 2025, politicians were targeted 56 times, almost reaching the 2024 total of 62.
- Since 2017, celebrities have been targeted in 21% of incidents, totaling 84 cases.
- Elon Musk was targeted 20 times, accounting for 24% of celebrity-related incidents.
- Taylor Swift was targeted 11 times.
- In 38% of cases, celebrity deepfakes were used for fraud.
- Politicians have been involved in 36% of all deepfake incidents since 2017, totaling 143 cases.
- Donald Trump was involved in 25 separate deepfake incidents, amounting to 18% of politician-related deepfakes.
- Joe Biden was involved in 20 deepfake incidents.
- In 76% of cases, politician deepfakes were used for political purposes.
- The general public was targeted 43% of the time, with 166 deepfake incidents.
- Deepfake impersonations in phishing attacks increased by 15% in 2024 (TPro, Egress, UpGuard, Trend Micro).
- 60% of consumers saw at least one deepfake video in the last year.
- 77% of voters encountered AI deepfake content related to political candidates leading up to the 2024 US election.
- YouTube has the highest deepfake exposure, with 49% of people surveyed reporting experiences with YouTube deepfakes.
- Deepfake use is now led by video (46%), followed by images (32%) and audio (22%).
- 62% of adult women in the U.S. said they were concerned about the spread of AI-created video and audio deepfakes in August 2023.
- Almost 60% of men also shared this worry about deepfakes.
- Only 1% of women and 3% of men in the U.S. said they weren’t worried at all about deepfakes.
Deepfake Phishing Stats on Organizational Preparedness and Awareness

- Roughly 1 in 4 company leaders admit to having little or no familiarity with deepfake technology (Business.com 2024).
- 31% of executives state they do not believe deepfakes have increased their company's fraud risk.
- 80% of companies report having no established protocols or response plans for handling a deepfake-based attack.
- More than half of business leaders admit their employees have received zero training on recognizing or dealing with deepfake attacks.
- Only 5% of company leaders say they have comprehensive deepfake attack prevention across multiple levels (Business.com 2024).
- 65% of Americans express concerns about potential privacy violations stemming from AI technologies.
- 49% of companies experienced both audio and video deepfakes in 2024, up from 37% (audio) and 29% (video) in 2022; in other words, roughly every second business globally reported a deepfake fraud incident.
- 46% of cybersecurity leaders are most concerned about the "advance of adversarial capabilities – phishing, malware development, deepfakes" in terms of AI risks.
- 69% of consumers want better fraud-prevention measures due to deepfakes.
- 72% of consumers report feeling constantly worried about being deceived by deepfakes.
- “Easy” or less sophisticated fraud accounted for 80.3% of all attacks in 2023, 7.4% higher than the previous year.
Deepfake Statistics on Regional and Document Trends

- Deepfake fraud grew by 1,740% in North America and 1,530% in the Asia-Pacific region between 2022 and 2023 (Sumsub).
- The global market for AI-generated deepfakes was projected to reach USD 79.1 million by the end of 2024, growing at a 37.6% CAGR.
- Malicious actors have begun using deepfakes to bypass verification checks, with such attempts up 3,000% as of 2024.
- Digital document forgeries increased 244% year-over-year in 2024 and have surged 1,600% since 2021.
- Digital forgeries accounted for 57% of all document fraud in 2024.
- Fraud involving fake or modified documents now outpaces AI-generated scams.
- 80% of Telegram channels have deepfake content.
- India's Tax ID was the most frequently targeted document type in 2024, at 27% of cases.
- Pakistan's National Identity Card accounted for 18% of targeted documents in 2024.
- Bangladesh's National Identity Card accounted for 15%.
Why Are Deepfake Phishing Attacks Increasing?
There are three main drivers behind the rise in deepfake phishing:
- Generative AI tools are cheap and accessible. Attackers no longer need Hollywood budgets; a voice can be cloned from just a few seconds of audio.
- Social media creates an endless source of training data. Public videos, interviews, and webinars make it easy to replicate executives’ voices and faces.
- Traditional phishing detection signals no longer work. Old advice like “look for spelling mistakes” or “check the tone” is outdated. AI creates flawless, convincing content.
The result is a perfect storm: scalable, believable, and highly effective phishing campaigns.
Real-World Impacts of Deepfake Phishing
Deepfake technology has moved beyond science fiction: highly convincing scams now use AI-generated voices and videos to impersonate trusted individuals, making them extremely difficult to detect. Understanding the real-world impact of deepfake phishing is essential for individuals and organizations looking to bolster their defenses against these evolving cyber threats:
Financial Fraud
The most common outcome is financial loss. From large-scale wire transfers to fraudulent vendor payments, deepfake scams trick finance teams and procurement officers into approving illegitimate requests.
Data Breaches
Some attacks aim for credentials rather than money. A deepfake impersonation may convince employees to share VPN access or reset account passwords. Once inside, attackers can exfiltrate data or launch further attacks.
Erosion of Trust
One overlooked impact is psychological: when employees learn about deepfake scams, they may hesitate to trust legitimate instructions from real leaders. This creates friction in everyday business operations.
Deepfake Phishing Trends to Watch in 2025
In 2025, attackers are moving beyond simple impersonation tactics, using AI to create highly convincing voice and video scams. These new methods make phishing harder to detect, especially when combined with smishing or spear-phishing campaigns. Understanding the latest deepfake phishing trends can help businesses and individuals stay one step ahead of cybercriminals:
- Targeting mid-level managers: Not only CEOs — attackers now clone department heads, making scams more believable.
- Deepfake + smishing combos: Voice deepfakes are combined with SMS phishing for multi-channel attacks.
- Scams against customers: Retail banks and telecoms report deepfake calls to customers, tricking them into “verifying” account details.
- AI-powered spear phishing: Attackers tailor deepfakes using personal details scraped from LinkedIn or other platforms.
These trends show a clear shift: attackers are broadening their focus and experimenting with blended techniques.
How to Protect Your Organization Against Deepfake Phishing
Protecting your organization requires a proactive and multi-layered approach, combining advanced technological solutions with comprehensive employee training. By understanding the tactics used by cybercriminals and implementing robust security measures, you can significantly reduce your vulnerability to these insidious attacks.
1. Build Strong Verification Processes
Never rely on a single channel. Sensitive requests (payments, access, credential resets) should require two-step or out-of-band verification — for example, a callback to a known number.
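As a rough illustration, here is a minimal sketch of what an out-of-band approval gate could look like in code. Everything here is hypothetical (the `PaymentRequest` type, the registered-channel table, the function names); the point is simply that a request arriving on one channel never executes until it is confirmed on an independent, pre-registered channel:

```python
# Hypothetical sketch: a payment request is only executable once it has
# been confirmed on a channel other than the one it arrived on.
from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    requester: str               # identity claimed on the inbound channel
    amount: float
    inbound_channel: str         # e.g. "email", "voice_call", "video_call"
    confirmations: set = field(default_factory=set)

# Callback channels registered out-of-band, long before any request arrives.
KNOWN_CALLBACK_CHANNELS = {"cfo@example.com": "phone:+1-555-0100"}

def confirm_via_callback(req: PaymentRequest) -> None:
    """Record a confirmation obtained via the pre-registered channel."""
    callback = KNOWN_CALLBACK_CHANNELS.get(req.requester)
    if callback is None:
        raise ValueError(f"No registered callback channel for {req.requester}")
    # ...place the call to the known number, verify the request verbally...
    req.confirmations.add(callback)

def can_execute(req: PaymentRequest) -> bool:
    """Require a confirmation from a channel other than the inbound one."""
    return any(ch != req.inbound_channel for ch in req.confirmations)
```

The design choice that matters: the callback channel is registered before any request exists, so an attacker who controls the inbound channel cannot also supply the verification path.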
2. Train Employees on Deepfake Risks
Employees need to see and hear how convincing AI voice/video scams can be. Scenario-based Security Awareness Training helps staff recognize red flags and practice secure responses.
3. Run Realistic Deepfake Simulations
Deepfake simulations prepare employees for the shock of receiving a realistic attack. These tools allow companies to test responses in a safe environment.
4. Deploy Detection Technology
Emerging solutions include liveness detection, voice biometrics, and AI call analysis. While not foolproof, these tools add another layer of defense.
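To show where such a tool could sit in a workflow, here is a deliberately naive sketch: it flags audio whose spectral statistics drift far from a baseline built on known-genuine recordings. Real detectors are trained models, often behind vendor APIs, so treat this purely as a placeholder for an automated screening step:

```python
# Toy heuristic only: real deepfake detection relies on trained models.
# This sketch flags clips whose average spectral flatness deviates
# sharply from a baseline measured on known-genuine recordings.
import numpy as np

def mean_spectral_flatness(signal: np.ndarray, frame: int = 1024) -> float:
    """Average ratio of geometric to arithmetic mean of the power spectrum."""
    flats = []
    for i in range(0, len(signal) - frame, frame):
        power = np.abs(np.fft.rfft(signal[i:i + frame])) ** 2 + 1e-12
        flats.append(np.exp(np.mean(np.log(power))) / np.mean(power))
    return float(np.mean(flats))

def is_suspicious(signal: np.ndarray, baseline_mean: float,
                  baseline_std: float, z_threshold: float = 3.0) -> bool:
    """Flag clips more than z_threshold standard deviations from baseline."""
    z = abs(mean_spectral_flatness(signal) - baseline_mean) / baseline_std
    return z > z_threshold
```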
5. Establish Rapid Reporting and Response
Create a clear, simple way for employees to report suspicious messages or calls. Integrate these reports into your incident response playbook.
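A reporting channel only works if it is frictionless. As a hypothetical sketch (the file-based queue stands in for whatever ticketing or SOAR system you actually use), a one-call hook that files a suspicious voice or video contact might look like this:

```python
# Hypothetical sketch: file a suspicious-contact report into a local
# JSONL queue; in practice this would POST to your ticketing/SOAR system.
import json, time, uuid

def report_suspicious_contact(reporter: str, channel: str, details: str) -> str:
    """Record a report and return a ticket ID the employee can reference."""
    ticket = {
        "id": str(uuid.uuid4()),
        "reported_at": time.time(),
        "reporter": reporter,
        "channel": channel,      # e.g. "voice_call", "video_meeting"
        "details": details,
    }
    with open("ir_queue.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(ticket) + "\n")
    return ticket["id"]
```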
Quick Checklist for CISOs and Security Leaders
This checklist aims to provide essential considerations and actionable steps to bolster your organization's defenses against these advanced forms of social engineering. By addressing these key areas, you can mitigate risks and safeguard your assets in an increasingly AI-driven threat environment.
- Add “deepfake phishing” to your threat models.
- Require independent verification for financial transactions.
- Run at least one deepfake simulation per quarter.
- Enable fraud detection tools for voice/video communications.
- Monitor vendor and partner processes for fraud vulnerabilities.