Artificial intelligence is making cyberattacks more convincing and harder to spot than ever before. Hackers are now using AI-generated deepfake videos to impersonate executives and colleagues on live video calls, and they're succeeding. In one documented case, a deepfake video attack resulted in a $25 million wire fraud loss in 2024, even though the target followed standard verification procedures before transferring the funds. This represents a fundamental shift in how criminals operate: they're no longer just sending suspicious emails; they're showing up on your Zoom call looking and sounding like someone you trust.

How Are Hackers Using AI in Cyberattacks?

According to Microsoft's latest threat intelligence report, attackers are leveraging artificial intelligence across every stage of a cyberattack. Rather than relying solely on technical skills, threat actors now use AI as a force multiplier to accelerate attacks and lower the technical barriers to entry. The technology is being weaponized in ways that affect both the technical and human sides of security.

- Phishing and Social Engineering: Hackers use generative AI to draft convincing phishing emails, translate content into different languages, and create realistic fake identities with culturally appropriate names and email addresses tailored to specific job roles.
- Voice Cloning and Impersonation: AI can now clone someone's voice in under five minutes, allowing attackers to impersonate executives, vendors, and help desk staff in phone calls that trick employees into revealing sensitive information or authorizing fraudulent transactions.
- Malware Development: Threat actors use AI coding tools to generate, debug, and refine malicious code, making it easier to create sophisticated attacks without deep technical expertise.
- Infrastructure and Fake Websites: AI helps attackers quickly generate fake company websites, provision infrastructure, and test their deployments to create convincing fronts for fraud schemes.

North Korean threat actors tracked as Jasper Sleet and Coral Sleet have been observed using AI to streamline the development of fraudulent digital personas for remote IT worker schemes, according to Microsoft. These actors use AI to generate lists of culturally appropriate names, create matching email address formats, and even review job postings to extract required skills, then tailor fake identities to match those specific roles.

Why Is Video Deepfaking the Newest Threat?

While many organizations have trained employees to spot phishing emails and voice cloning attempts, live video deepfakes represent a gap in most security testing programs.

"Video is the attack vector no one is testing," explains Jason Thatcher, CEO and Founder of Breacher.ai, a platform that simulates AI-powered social engineering attacks. "A finance worker can spot a phishing email. They cannot spot a CFO on a live Zoom call."

The threat is active and growing. AI social engineering fraud exceeded $200 million in the first quarter of 2025 alone, according to Resemble AI's Q1 2025 Deepfake Incident Report. What makes deepfake video attacks particularly dangerous is that they're interactive and conversational, not pre-recorded clips. The synthetic participant can respond to questions and adapt to the conversation in real time, making the attack far more convincing than a static video.

Steps to Protect Your Organization From AI-Powered Attacks

- Test Your Human Layer: Conduct simulations that cover the full range of AI social engineering threats: phishing emails, voice cloning calls, SMS attacks, and now live deepfake video calls on Teams, Zoom, and Google Meet. Testing should mimic how real attackers operate by combining multiple channels in coordinated campaigns (see the first sketch after this list).
- Strengthen Identity and Credential Security: Focus on detecting abnormal credential use, hardening identity systems against phishing, and implementing robust verification procedures for high-value transactions, especially wire transfers and access requests from executives (second sketch below).
- Treat AI-Powered Schemes as Insider Risks: Many AI-enabled attacks rely on compromised legitimate access. Monitor for suspicious activity from accounts that have been compromised or created under fake identities, and apply additional scrutiny to remote IT worker hiring and onboarding (third sketch below).
- Deliver Behavior-Focused Training: New regulations such as NIS2 and DORA (the Digital Operational Resilience Act) now require organizations to demonstrate actual behavior change, not just training completion. Micro-training modules delivered at the moment of failure are more effective than generic awareness programs (fourth sketch below).
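To make the first step concrete, here is a minimal sketch of what a coordinated, multi-channel simulation campaign could look like. The `Campaign` and `Wave` structures, and all the names in them, are invented for illustration and do not correspond to any particular vendor's API:

```python
# Illustrative only: these dataclasses are invented for this sketch and do
# not correspond to any specific simulation vendor's API.
from dataclasses import dataclass, field


@dataclass
class Wave:
    channel: str       # "email", "voice", "sms", or "video"
    pretext: str       # the story the simulated attacker tells
    delay_hours: int   # hours after the previous wave


@dataclass
class Campaign:
    name: str
    target_group: str
    waves: list[Wave] = field(default_factory=list)


# Real attackers chain channels, so the drill does too: a phishing email,
# a voice-cloned follow-up call, then a live deepfake video meeting.
wire_fraud_drill = Campaign(
    name="Q3 finance wire-fraud drill",
    target_group="finance-approvers",
    waves=[
        Wave("email", "urgent invoice from a known vendor", 0),
        Wave("voice", "cloned 'CFO' voice call referencing the email", 24),
        Wave("video", "live deepfake 'CFO' on a Teams or Zoom call", 48),
    ],
)
```

The point of the escalating structure is that each wave lends credibility to the next, which is how the documented multi-channel frauds actually unfold.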
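For the identity and verification step, the control that matters most against the $25 million scenario described above is an out-of-band verification rule. A minimal sketch follows, assuming hypothetical function, threshold, and channel names:

```python
# Illustrative policy gate for high-value transfers. The key property:
# approval gathered on the SAME channel the request arrived on never
# counts, because a deepfake controls that channel.
APPROVAL_THRESHOLD = 10_000  # USD; set per policy

TRUSTED_CHANNELS = {"callback_to_directory_number", "in_person", "signed_ticket"}


def release_wire(amount: float, requested_via: str, verified_via: set[str]) -> bool:
    """Release funds only after an out-of-band verification succeeds."""
    if amount < APPROVAL_THRESHOLD:
        return True
    if requested_via in verified_via:
        raise ValueError("verification must use a different channel than the request")
    return bool(TRUSTED_CHANNELS & verified_via)


# A "CFO" on a video call asks for $250k. Confirming on the call itself is
# rejected; a callback to the CFO's directory-listed number passes.
assert release_wire(250_000, "video_call", {"callback_to_directory_number"})
```

The design choice worth noting: the rule treats the requesting channel as hostile by default, which is exactly the assumption live deepfakes force on finance teams.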
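For the insider-risk step, one low-cost starting point is to treat young accounts as higher risk, since fake-identity remote-worker schemes surface early in an account's life. The field names in this sketch are invented; in practice they would map to your identity provider and HR system of record:

```python
# Illustrative insider-risk rule: accounts inside a probation window get
# extra scrutiny. Field names are invented for this sketch.
from datetime import datetime, timedelta, timezone

PROBATION = timedelta(days=90)
PRIVILEGED_ACTIONS = {"add_oauth_grant", "export_repo", "change_payroll_details"}


def needs_review(account: dict, action: str) -> bool:
    age = datetime.now(timezone.utc) - account["created_at"]
    if age > PROBATION:
        return False
    # A young account performing a privileged action, or signing in from a
    # geography that contradicts the location on file, warrants a human look.
    geo_mismatch = account["login_country"] != account["hr_country"]
    return action in PRIVILEGED_ACTIONS or geo_mismatch


ten_day_old_hire = {
    "created_at": datetime.now(timezone.utc) - timedelta(days=10),
    "login_country": "XX",
    "hr_country": "US",
}
assert needs_review(ten_day_old_hire, "export_repo")
```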
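For the training step, a moment-of-failure trigger can be as simple as an event handler that assigns a short module and keeps an audit trail. In this sketch, `assign_module` is a stub standing in for whatever training platform API an organization actually uses, and the module names are placeholders:

```python
# Illustrative moment-of-failure trigger. The audit trail matters because
# NIS2/DORA-style reviews ask for evidence of behavior change, not just
# completion counts.
from datetime import datetime, timezone

MODULES = {
    "email": "spotting-ai-phishing",
    "voice": "verifying-callers-out-of-band",
    "video": "challenging-identities-on-live-calls",
}

audit_log: list[dict] = []


def assign_module(user: str, module: str) -> None:
    # Stub: replace with your training platform's real API call.
    print(f"assigning {module!r} to {user}")


def on_simulation_failure(user: str, channel: str) -> None:
    """Called the moment a user fails a simulated attack on any channel."""
    module = MODULES.get(channel, "general-social-engineering")
    assign_module(user, module)
    audit_log.append({
        "user": user,
        "channel": channel,
        "module": module,
        "assigned_at": datetime.now(timezone.utc).isoformat(),
    })
```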
Security teams should also recognize that when AI safeguards attempt to prevent misuse, threat actors use jailbreaking techniques to trick language models into generating malicious code or content. Securing the AI systems themselves, by preventing unauthorized access and monitoring for abuse, has therefore become part of the cybersecurity equation.

What's Changing in Cybersecurity Right Now?

The convergence of AI capabilities and cybercrime represents a fundamental shift in the threat landscape. Microsoft researchers have begun observing threat actors experimenting with agentic AI, technology that can perform tasks autonomously and adapt to results, though most AI use today still relies on human operators to make final decisions about targeting and deployment.

Early-access clients testing deepfake video simulation platforms have reported surprise at how convincing the technology has become. "Users were surprised with how good the deepfakes were. Really crazy talking to a deepfake," reported an IT manager at a financial services company in the UK. A CEO of a cybersecurity firm noted, "I was expecting a demo, not an episode of Black Mirror. I'm surprised at how advanced it's gotten."

The bottom line: AI is making social engineering attacks faster, cheaper, and more convincing. Organizations that continue to rely solely on email filtering and generic awareness training are leaving themselves vulnerable to attacks that can bypass traditional defenses. The human layer, your employees, is now the primary target, and they need to be tested and trained against the actual threats they're likely to face.