Real-World Incidents · 2024–2025
These are not hypothetical scenarios. Every incident below is a documented event: companies that lost millions, attacks that were narrowly prevented, and systemic trends that are accelerating. All are sourced from major news outlets and security researchers.
$25.6M · Single largest deepfake video fraud (2024)
400% · Surge in voice clone scams (2024→2025)
220% · Rise in North Korean IT worker infiltrations
94% · Increase in UK deepfake attacks (2025)
A finance employee in the Hong Kong office of UK engineering firm Arup was tricked into transferring $25.6 million after joining a video conference call in which every other participant, including the CFO, was an AI-generated deepfake. The worker had initial doubts but was reassured by the realistic appearance of colleagues he recognized.
A Ferrari executive received WhatsApp messages from someone claiming to be CEO Benedetto Vigna, followed by a call in a convincing clone of his voice, asking him to authorize a confidential acquisition. Suspicious of slight irregularities in the voice, the executive asked a question only the real CEO could answer; the fraudster immediately hung up. That verification instinct prevented a major loss.
Fraudsters created a fake WhatsApp account using a publicly available photo of WPP CEO Mark Read, then set up a Microsoft Teams meeting featuring a voice clone of Read and another senior executive. The scammers attempted to persuade a WPP agency head to set up a new business and provide personal financial information. The attempt was unsuccessful.
A LastPass employee received a series of calls, texts, and voicemails on WhatsApp from a threat actor impersonating CEO Karim Toubba using AI-generated audio. The employee correctly identified it as a social engineering attempt because the communication was happening outside normal work channels and the urgency was suspicious. LastPass publicly disclosed the attempt to warn others.
A Singapore-based company lost approximately $499,000 after employees were deceived by a deepfake video call featuring what appeared to be their CEO. The fraudsters used AI-generated video to impersonate the executive during a live call, convincing finance staff to authorize a series of transfers. The incident is one of the first documented cases of real-time deepfake video used in a successful corporate fraud in Southeast Asia.
The FBI warned that North Korean state-sponsored operatives are using AI face-swap technology and deepfake video during remote job interviews to gain employment at US companies. Once hired, they exfiltrate sensitive data, install backdoors, and funnel salaries to fund weapons programs. Infiltrations surged 220% over 12 months. Companies in tech, defense, and finance were primary targets.
A comprehensive industry report documented a 400% surge in AI voice cloning fraud between 2024 and 2025. Losses from voice clone scams exceeded $200 million in the first quarter of 2025 alone, and total imposter scam losses reached $3 billion in 2024. Small and medium businesses were disproportionately affected because they typically lack enterprise-grade security controls.
Security researchers at Palo Alto Networks documented that fraudsters can construct a fully convincing deepfake job applicant — complete with AI-generated face, cloned voice, and fabricated credentials — in approximately 70 minutes. The technique is being actively used to place fake employees inside organizations, particularly for remote roles with system access.
A Sumsub industry report found that deepfake fraud attempts in the United Kingdom nearly doubled in 2025, rising 94% year-over-year. Globally, sophisticated fraud increased 180%. Nearly 3 in 5 European consumers reported being a victim of some form of fraud in 2025, with AI-generated impersonation attacks representing the fastest-growing category.
A detailed investigation by The Hacker News documented a systematic pattern of deepfake-driven hiring fraud, where fake employees use AI-generated identities to pass background checks, video interviews, and onboarding. Once inside, they steal data, compromise systems, and evade detection for months. The report identified cases across technology, healthcare, and financial services sectors.
Sources & Accuracy: All incidents are sourced from publicly reported news articles, corporate disclosures, and security research reports. Dollar amounts and statistics reflect figures reported at time of publication. Links open original source articles. This page is maintained as a reference resource for LiveLock product demonstrations.
Every incident above could have been prevented with out-of-band verification.
LiveLock adds the human checkpoint that AI impersonation cannot bypass.