Real-World Incidents · 2024–2025

News & Incidents

These are not hypothetical scenarios. Every incident below is a documented event: companies that lost millions, attacks that were narrowly prevented, and systemic trends that are accelerating. All are drawn from major news outlets and security research reports.

$25.6M

Single largest deepfake video fraud (2024)

400%

Surge in voice clone scams (2024→2025)

220%

Rise in North Korean IT worker infiltrations

94%

Increase in UK deepfake attacks (2025)

Deepfake Video · CONFIRMED LOSS · $25.6 million

Finance worker transfers $25.6M after deepfake CFO video call

A finance employee at UK engineering firm Arup was tricked into transferring $25.6 million after attending a video conference call in which all other participants — including the CFO — were AI-generated deepfakes. The worker had initial doubts but was reassured by the realistic appearance of colleagues he recognized.

Hong Kong / UK
February 2024
CFO Impersonation · Video Call · Wire Transfer
Voice Clone · ATTACK PREVENTED

Ferrari executive stops deepfake CEO scam with one question

A Ferrari executive received a WhatsApp message from someone claiming to be CEO Benedetto Vigna, asking to authorize a confidential acquisition. The executive grew suspicious of the slightly altered voice and asked a personal question only the real CEO would know — the fraudster immediately hung up. Ferrari's verification instinct prevented a major loss.

Italy
July 2024
CEO Impersonation · WhatsApp · Voice Clone
Deepfake Video · ATTACK PREVENTED

WPP CEO Mark Read targeted in deepfake WhatsApp scam

Fraudsters created a fake WhatsApp account using a publicly available photo of WPP CEO Mark Read, then set up a Microsoft Teams meeting with a voice clone of Read and another senior executive. The scam attempted to persuade a WPP agency head to set up a new business and provide personal financial information. The attempt was unsuccessful.

United Kingdom
May 2024
CEO Impersonation · WhatsApp · Teams Meeting
Voice Clone · ATTACK PREVENTED

LastPass employee targeted with deepfake CEO audio via WhatsApp

A LastPass employee received a series of calls, texts, and voicemails on WhatsApp from a threat actor impersonating CEO Karim Toubba using AI-generated audio. The employee correctly identified it as a social engineering attempt because the communication was happening outside normal work channels and the urgency was suspicious. LastPass publicly disclosed the attempt to warn others.

United States
April 2024
CEO Impersonation · Audio Deepfake · WhatsApp
Deepfake Video · CONFIRMED LOSS · $499,000

Singapore company loses $499K in deepfake CEO video scam

A Singapore-based company lost approximately $499,000 after employees were deceived by a deepfake video call featuring what appeared to be their CEO. The fraudsters used AI-generated video to impersonate the executive during a live call, convincing finance staff to authorize a series of transfers. The incident is one of the first documented cases of real-time deepfake video used in a successful corporate fraud in Southeast Asia.

Singapore
2025
CEO Impersonation · Video Call · Wire Transfer
Identity Fraud · ONGOING TREND

North Korean IT workers infiltrate US companies via deepfake interviews

The FBI warned that North Korean state-sponsored operatives are using AI face-swap technology and deepfake video during remote job interviews to gain employment at US companies. Once hired, they exfiltrate sensitive data, install backdoors, and funnel salaries to fund weapons programs. Infiltrations surged 220% over 12 months. Companies in tech, defense, and finance were primary targets.

United States (Global)
2024–2025
State-Sponsored · Job Interview · Insider Threat
Voice Clone · ONGOING TREND · $200M+ (Q1 2025)

AI voice cloning scams surge 400% — $200M lost in Q1 2025 alone

A comprehensive industry report documented a 400% surge in AI voice cloning fraud between 2024 and 2025. Losses from voice clone scams exceeded $200 million in the first quarter of 2025 alone, with total imposter scam losses reaching $3 billion in 2024. Small and medium businesses were disproportionately affected due to lack of enterprise security controls.

United States
January 2026
Industry Report · SMB Risk · FTC Data
Identity Fraud · ONGOING TREND

Deepfake job applicants can be created in 70 minutes — Palo Alto Networks

Security researchers at Palo Alto Networks documented that fraudsters can construct a fully convincing deepfake job applicant — complete with AI-generated face, cloned voice, and fabricated credentials — in approximately 70 minutes. The technique is being actively used to place fake employees inside organizations, particularly for remote roles with system access.

Global
April 2025
Hiring Fraud · Remote Work · Identity Synthesis
Deepfake Video · ONGOING TREND

UK deepfake attacks nearly double — 94% rise in 2025

A Sumsub industry report found that deepfake fraud attempts in the United Kingdom nearly doubled in 2025, rising 94% year-over-year. Globally, sophisticated fraud increased 180%. Nearly 3 in 5 European consumers reported being a victim of some form of fraud in 2025, with AI-generated impersonation attacks representing the fastest-growing category.

United Kingdom / Europe
December 2025
Industry Report · UK · Europe
Insider Threat · ONGOING TREND

Deepfake hiring fraud: fake employees gaining access across organizations

A detailed investigation by The Hacker News documented a systematic pattern of deepfake-driven hiring fraud, where fake employees use AI-generated identities to pass background checks, video interviews, and onboarding. Once inside, they steal data, compromise systems, and evade detection for months. The report identified cases across technology, healthcare, and financial services sectors.

Global
January 2026
Hiring Fraud · Insider Threat · Data Breach

Sources & Accuracy: All incidents are sourced from publicly reported news articles, corporate disclosures, and security research reports. Dollar amounts and statistics reflect figures reported at time of publication. Links open original source articles. This page is maintained as a reference resource for LiveLock product demonstrations.

Every incident above could have been prevented with out-of-band verification.

LiveLock adds the human checkpoint that AI impersonation cannot bypass.
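The out-of-band rule behind that checkpoint is simple to state in code. The sketch below is illustrative only (the directory entries, names, and amounts are invented, not LiveLock's actual implementation): contact details supplied inside a payment request are never trusted, and confirmation always goes through a channel that was established before the request arrived.

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    requester: str        # claimed identity, e.g. an email address
    amount: float         # requested transfer amount
    callback_number: str  # number supplied IN the request -- attacker-controlled

# Contact channels established in advance, out of band.
# A deepfake can forge the request, but not this directory. (Hypothetical data.)
TRUSTED_DIRECTORY = {"cfo@example.com": "+1-555-0100"}

def out_of_band_check(req: PaymentRequest) -> str:
    """Hold every transfer until confirmed via a pre-registered channel.

    The callback number inside the request is deliberately ignored:
    calling it would route the "verification" straight to the fraudster.
    """
    trusted = TRUSTED_DIRECTORY.get(req.requester)
    if trusted is None:
        return "REJECT: requester not in trusted directory"
    return f"HOLD: confirm via {trusted} before transfer"
```

In the Arup-style scenario, even a perfect deepfake of the CFO fails this check, because the one phone number staff are allowed to dial is the one the fraudster does not control.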