The $25 Million Deepfake Heist That Changed How Companies Think About Trust
In early 2024, an employee at UK engineering firm Arup received what seemed like a routine video call from senior management requesting a large financial transfer. The employee complied, sending $25 million to the criminals on the other end. The catch: the managers weren't real. They were artificial intelligence deepfakes, and the incident exposed a critical blind spot in how organizations defend themselves.
This wasn't a traditional cyberattack that compromised Arup's digital systems or stole data. Instead, it was what security experts call "technology-enhanced social engineering," where attackers use AI to manipulate human psychology rather than exploit software vulnerabilities. The attack succeeded because it played on something fundamental to how humans communicate: we trust what we see and hear.
The incident reveals a troubling reality about modern cybercrime. While organizations invest heavily in firewalls, encryption, and intrusion detection systems, attackers are increasingly bypassing these defenses entirely by targeting the human element. As Rob Greig, Chief Information Officer at Arup, explained, the technology landscape has shifted dramatically in recent years.
"It's freely available to someone with very little technical skill to copy a voice, image or even a video," said Rob Greig, Chief Information Officer at Arup.
To illustrate just how accessible this technology has become, Greig conducted an experiment after the attack. Using open-source software, he created a deepfake video of himself in real time. The process took approximately 45 minutes, and while the result wasn't polished, it demonstrated how quickly someone with minimal technical expertise could produce a workable impersonation.
What Makes Deepfake Attacks So Difficult to Detect?
The fundamental challenge with deepfake-based fraud is that it exploits the trust humans naturally place in audio and visual cues. When someone sees a video of their CEO or receives a call from a trusted colleague, their instinct is to believe it's authentic. Deepfake technology has become sophisticated enough to pass this basic credibility test, especially in high-pressure situations where employees feel obligated to act quickly.
Unlike traditional cyberattacks that leave digital traces, deepfake attacks leave no compromised systems or stolen data for security teams to detect. The crime happens entirely in the realm of human interaction and decision-making. This is why Greig emphasized the importance of reframing how we think about the threat. The term "deepfake" sounds technical and abstract, but it simply means someone used technology to impersonate another person.
The accessibility of deepfake creation tools has democratized this form of attack. What once required specialized knowledge and expensive equipment can now be accomplished with freely available software and a basic understanding of how to use it. This means the pool of potential attackers has expanded dramatically, from sophisticated criminal organizations to individuals with minimal technical skills.
How to Protect Your Organization From Deepfake and Social Engineering Attacks
- Establish Clear Visibility: Organizations need to understand what data is moving through their systems, who has access to what information, and what activity patterns are normal versus suspicious. This visibility allows security teams to detect unusual transactions or communications quickly.
- Create Rehearsed Response Protocols: Rather than waiting for a specific incident to occur, organizations should develop and practice responses to general categories of attacks. When an incident happens, employees know their roles and can respond swiftly without confusion or delay.
- Implement Verification Procedures for High-Value Transactions: For financial transfers above certain thresholds, establish secondary verification methods that don't rely solely on video or audio communication. This might include in-person confirmation, callback verification to known phone numbers, or multi-step approval processes.
- Educate Employees About Impersonation Risks: Help staff understand that audio and visual cues can be faked and encourage them to question what they see, especially when requests involve sensitive actions like fund transfers.
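The verification principle above can be sketched in code. The following is a minimal, illustrative policy gate, not any real payment system's API: the threshold amount, the check names, and the `TransferRequest` structure are all hypothetical, chosen only to show the idea that above a certain value, no transfer should clear on a video or audio request alone.

```python
from dataclasses import dataclass, field

# Illustrative threshold above which a video/audio request alone is never
# sufficient authorization (the figure and policy are hypothetical).
HIGH_VALUE_THRESHOLD = 50_000

# Out-of-band checks that do not depend on seeing or hearing the requester:
# a callback to a phone number already on file, plus an independent approver.
REQUIRED_CHECKS = {"callback_to_known_number", "second_approver"}

@dataclass
class TransferRequest:
    amount: float
    requested_by: str
    verifications: set = field(default_factory=set)

def can_execute(req: TransferRequest) -> bool:
    """Low-value transfers pass; high-value ones need every out-of-band check."""
    if req.amount < HIGH_VALUE_THRESHOLD:
        return True
    return REQUIRED_CHECKS.issubset(req.verifications)

# A convincing video call "from senior management" is not, by itself, enough:
req = TransferRequest(amount=25_000_000, requested_by="cfo@example.com")
assert can_execute(req) is False

# Only after the out-of-band checks are logged does the transfer clear:
req.verifications.update(REQUIRED_CHECKS)
assert can_execute(req) is True
```

The key design choice is that the gate never inspects *how* the request arrived; a deepfake that fools an employee on a call still cannot satisfy a callback to a known number or a second approver who was never contacted.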
In Arup's case, the company's rapid response was critical to limiting damage. Once the attack was discovered, security teams quickly assessed the extent of the incident and determined that no systems had been compromised and no client data was at risk. This swift action prevented the incident from cascading into a larger crisis.
Why Traditional Cybersecurity Isn't Enough Against This Threat
The Arup incident highlights a fundamental gap in how many organizations approach cybersecurity. Most defenses focus on protecting digital infrastructure: preventing unauthorized access, detecting malware, and securing data. These measures are essential, but they don't address attacks that bypass technology entirely by manipulating human decision-making.
Greig noted that while organizations hear a lot about emerging threats like artificial intelligence and quantum computing, the reality is that many of the same vulnerabilities that have plagued businesses for years remain serious risks. Simple attack vectors like USB devices continue to be used to compromise organizations, and phishing emails remain one of the top methods for compromising individuals, businesses, and even governments.
The deepfake attack on Arup demonstrates that cybersecurity in the modern era requires a hybrid approach. Organizations need robust technical defenses, but they also need to recognize that human psychology is now a critical part of the attack surface. When attackers can create convincing video and audio impersonations in minutes, the traditional assumption that "seeing is believing" becomes dangerous.
This shift represents a fundamental change in how organizations should think about risk. Cyber resilience, according to the World Economic Forum's research on the topic, requires understanding not just technical vulnerabilities but also the human and organizational factors that make attacks successful. The $25 million loss at Arup serves as a stark reminder that in an age of AI-powered impersonation, trust itself has become a security vulnerability that requires active management.