Key insights
- Deepfake-related fraud has already caused $2.19 billion in losses globally. Of that amount, a staggering $1.65 billion was reported in 2025 alone. The threat is not slowing down either, as 2026 has already seen $96 million lost to deepfake scams.
- Surfshark's analysis also reveals that the scammers' most successful tactic is using deepfakes of government officials or celebrities to endorse fraudulent investment opportunities. This method alone has caused $1.13 billion in damages, or 52% of all reported deepfake-related fraud losses. Corporate attacks, such as impersonating CEOs to request unauthorized transactions, follow at 25%. Other significant contributors are financial crimes in which victims' identities are stolen and scammers use deepfake technology to secure bank loans or drain accounts (9%), deepfaked romance scams (7%), family member impersonation (6%), and various other forms of deepfake-related fraud (2%).
- The United States was the most targeted country globally for deepfake-related scams, suffering $712 million in losses. Of these, 43% occurred in the corporate sector, involving scams in which deepfakes were used to trick organizations into sending money or, in some cases, to place fake candidates in remote jobs. Another 31% of losses resulted from deepfaked investment opportunities. A particularly concerning trend in the US is the impersonation of family members using deepfakes, which has already caused $124 million in losses, or 17% of the US total. While deepfake family member scams have appeared in other countries, the US currently accounts for 99.9% of all such losses globally, a pattern that is likely to spread internationally in the near future.
- Europe has three countries in the top 10 globally for financial losses from deepfake-related scams. The United Kingdom leads among European countries with $149 million in losses, followed by Sweden with $63 million, and Spain with $56 million. In these three countries, 90% of losses were caused by scams that used deepfakes of famous people to endorse various investment opportunities. The remaining losses, particularly in the UK, were due to scams targeting corporations and incidents involving deepfake romance schemes.
- Looking at other top 10 countries, Malaysia ranks second globally with $502 million in losses, 99.7% of which stemmed from deepfaked investment opportunities. Hong Kong follows in third place with $229 million — notably, it is the global leader in deepfake romance fraud, accounting for $105 million in losses. Indonesia, ranking fifth with $139 million, is a major outlier. In this case, deepfake technology was primarily used to bypass bank security measures to secure fraudulent loans, representing 99.4% of the country's total losses.
Methodology and sources
This study used data from the AI Incident Database, Resemble.AI, and the OECD to create a combined dataset covering deepfake incidents from January 2019 to March 2026. Incidents were included if they involved the generation of synthetic videos, images, or audio, and were verified by media reports with clearly documented financial losses. Each deepfake incident was then classified by target country and specific attack vector. These figures represent a conservative estimate based on publicly reported data, and the source databases included records of deepfake incidents across multiple languages.
Attack vectors were classified as follows:
- Celebrity fake investment endorsement: incidents where deepfakes were used to create fake endorsements from famous people to promote fraudulent investment opportunities;
- Government fake investment endorsement: use of deepfakes to impersonate government officials promoting fake investment schemes;
- Romance: general romance scams using deepfakes to create fabricated identities and relationships, not involving celebrities;
- Celebrity romance: scams involving deepfaked celebrities used to deceive victims into online romantic relationships for financial gain;
- Corporate: attacks targeting organizations, typically through CEO or executive impersonation, to authorize fraudulent transfers or gain access to sensitive information;
- Family: scams in which deepfakes were used to impersonate a victim’s relative, often to solicit urgent funds or information;
- Finance: incidents focused on financial fraud, such as taking out loans or credit under a victim’s identity, using deepfaked media for authentication;
- Extortion: cases where deepfake content was used to blackmail or threaten victims, including using synthetic media to fabricate compromising situations;
- Pets: incidents where deepfakes of lost pets were used to demand payment;
- Other: cases that do not fit into the above categories, including novel schemes or unclassified methods.
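The classification and aggregation described above can be sketched in a few lines of Python. This is an illustrative sketch only, not Surfshark's actual pipeline: the incident records and dollar figures below are hypothetical placeholders that merely echo the kinds of totals reported in the study.

```python
from collections import defaultdict

# Hypothetical incident records: (country, attack_vector, loss_usd).
# Real records would come from the combined AI Incident Database /
# Resemble.AI / OECD dataset after verification and classification.
incidents = [
    ("US", "corporate", 300_000_000),
    ("US", "family", 124_000_000),
    ("Malaysia", "celebrity_investment", 500_000_000),
    ("Hong Kong", "romance", 105_000_000),
]

# Sum losses per attack vector.
totals = defaultdict(float)
for country, vector, loss in incidents:
    totals[vector] += loss

# Express each vector's losses as a share of the grand total.
grand_total = sum(totals.values())
for vector, loss in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{vector}: ${loss / 1e6:,.0f}M ({loss / grand_total:.0%})")
```

The same grouping by the first tuple element (country) would produce the per-country breakdowns quoted in the key insights.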
Data was collected from:
- AI Incident Database (2026)
- Resemble.AI (2026). Deepfake Incident Database
- OECD (2026). AI Incidents and Hazards Monitor
This post was originally published on Surfshark and is republished here with permission.
Reviewed by Irfan Ahmad.
