Zoom and Sam Altman's World Partner to Stop AI Deepfakes in Video Calls
Zoom has announced a partnership with World, Sam Altman's human identity verification company, to combat AI-generated deepfakes infiltrating video meetings. The integration uses facial recognition technology to verify that meeting participants are real people, not synthetic imposters. This marks a significant escalation in corporate defenses against a threat that has already cost businesses hundreds of millions of dollars.
Why Are Deepfake Video Call Attacks Becoming a Major Business Risk?
The threat is no longer theoretical. In early 2024, engineering firm Arup lost $25 million after an employee in Hong Kong authorized wire transfers during a video call with the company's CFO and colleagues. Every person on that call, except the victim, was an AI-generated deepfake. A similar attack hit a multinational firm in Singapore in 2025, demonstrating that these attacks are spreading across industries and geographies.
The financial impact is staggering. Financial losses from deepfake-enabled fraud exceeded $200 million in just the first quarter of 2025, according to industry estimates, with the average loss per corporate incident now topping $500,000. For businesses that regularly conduct high-value transactions over video, this represents a serious and growing vulnerability.
How Does World's Deep Face Verification Technology Work?
Rather than relying on frame-by-frame video analysis, which both companies say is becoming unreliable as AI video models improve, World uses a three-pronged verification approach called World ID Deep Face. The system cross-references three distinct data points to confirm someone is human:
- Registration Image: A signed photograph taken when the user first registered through World's Orb device, a specialized biometric scanner.
- Real-Time Face Scan: A live facial scan captured from the user's device during the meeting.
- Live Video Frame: The actual video frame visible to other meeting participants in the call.
The system only verifies someone when all three elements match, at which point a "Verified Human" badge appears next to that participant's name. Zoom hosts can enable a Deep Face waiting room to require all participants to verify their identity before joining, or participants can request that someone verify themselves on the spot during a call.
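The "all three must match" rule described above can be sketched in a few lines of Python. This is purely illustrative: the function and variable names, the cosine-similarity comparison, and the threshold value are all assumptions for the sake of the sketch, not part of World's actual system, which compares biometric templates and includes liveness checks omitted here.

```python
import math

# Hypothetical sketch of a three-way face-match rule. The threshold is an
# assumed value, not one published by World or Zoom.
MATCH_THRESHOLD = 0.92

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_verified_human(registration_emb, live_scan_emb, video_frame_emb):
    """Grant the 'Verified Human' badge only if every pairwise
    comparison among the three data points clears the threshold."""
    pairs = [
        (registration_emb, live_scan_emb),    # Orb registration vs. live scan
        (registration_emb, video_frame_emb),  # Orb registration vs. video feed
        (live_scan_emb, video_frame_emb),     # live scan vs. video feed
    ]
    return all(cosine_similarity(a, b) >= MATCH_THRESHOLD for a, b in pairs)
```

The key design point the sketch captures is conjunctive matching: a deepfake that fools the live camera check still fails unless it also matches the signed registration image captured at the Orb, so a single spoofed input is not enough.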
"This integration is part of Zoom's open ecosystem approach, giving customers more ways to build trust into their workflows based on what matters most for their use case," said Travis Isaman, a Zoom spokesperson.
The partnership reflects a broader strategy by World to embed human verification across digital platforms. Beyond Zoom, Altman's company has been building partnerships with consumer platforms including Tinder and Visa for human verification purposes. Last month, World released technology to verify that real humans, rather than automated AI programs, are behind AI shopping agents at the point of purchase.
Steps Organizations Can Take to Protect Against Deepfake Fraud
- Enable Verification for High-Value Calls: Activate Deep Face waiting rooms for meetings involving financial transactions, executive decisions, or sensitive business discussions.
- Request Mid-Call Verification: Train employees to request that participants verify their identity on the spot if something feels unusual during a video call.
- Implement Multi-Factor Authentication: Combine video verification with other security measures like callback verification for wire transfer requests.
- Educate Teams on Deepfake Red Flags: Help employees recognize unusual behavior, audio delays, or visual artifacts that may indicate a synthetic participant.
The emergence of this partnership signals that video call security is becoming a critical business concern. As AI video generation technology becomes more sophisticated, traditional detection methods are losing effectiveness. The three-point verification system represents a shift toward biometric-based authentication rather than relying on AI to detect AI-generated content. For organizations handling sensitive communications or financial transactions, this technology offers a concrete defense against an increasingly sophisticated threat landscape.