Threat actors are no longer trying to break into corporate networks; they are using AI-generated deepfakes and voice synthesis to apply for jobs, pass interviews, and log in with legitimate credentials. This fundamental shift in attack strategy has caught the cybersecurity industry off guard, with security leaders warning that AI-powered threats are accelerating far beyond the industry's ability to defend against them.

How Are Attackers Using AI to Infiltrate Companies?

Recent investigations by Microsoft and Cloudflare have documented a sophisticated attack pattern that exploits the human element of hiring and onboarding. A group operating out of North Korea has been using AI to create highly convincing emails, fake personas, and deepfake audio and video capable of speaking on the phone or appearing in video calls with realistic human behavior. The attackers then use these fabricated identities to apply for information technology roles at global firms.

Once hired, the threat actors gain access to company accounts and can exfiltrate intellectual property or install malware without raising suspicion. The attack vector is particularly effective because it bypasses traditional security controls, which focus on external threats rather than insider access.

"Identity is still the No. 1 access vector. AI is amplifying identity-based attacks. Adversaries no longer break in, they log in," explained Brian Contos, field chief information security officer at Mitiga Inc.

Cloudflare researchers have documented attackers using AI to fabricate profiles, conduct deepfake interviews with audio manipulation, gain trusted access, and then exfiltrate intellectual property from unsuspecting firms. Most alarming, these threat actors are clearing background checks and gaining the ability to use company accounts for virtually any purpose.
What Makes This Attack Strategy So Difficult to Detect?

The speed and sophistication of AI-powered identity fraud have created a detection gap that traditional security tools cannot close. Unlike malware or network intrusions, these attacks look legitimate from the inside: the attacker holds a valid employee account, passes authentication systems, and behaves like a normal user during the early phases of employment.

Alongside deepfake impersonation, credential theft is also on the rise. In one notable incident, attackers exploited a misconfiguration in a GitHub environment affecting Trivy, an open-source vulnerability scanner widely used in the DevSecOps community. The attackers inserted malicious "infostealer" code into the tool and force-updated existing version tags. Infostealer malware is designed to breach systems and steal sensitive data such as login credentials, financial details, and personal information.

This supply chain attack had downstream consequences affecting more than 1,000 software-as-a-service (SaaS) environments, with projections that the number of impacted victims could grow by another 500, 1,000, or even 10,000 in the coming weeks and months.

Steps to Strengthen Identity-Based Defenses Against AI Threats

- Implement Continuous Behavioral Monitoring: Monitor employee accounts for unusual activity patterns, such as access outside normal working hours, large data downloads, or use of resources unrelated to job responsibilities. AI-powered user and entity behavior analytics (UEBA) can detect anomalies that humans might miss.
- Enhance Interview and Onboarding Verification: Require in-person interviews or video calls with multiple team members, use liveness detection technology to verify that video participants are real humans, and conduct thorough background checks that include direct contact with previous employers rather than relying solely on automated systems.
- Deploy AI-Powered Credential Monitoring: Use security tools that monitor the dark web and credential databases for stolen login information, and implement passwordless authentication methods such as hardware security keys or biometric verification to reduce the risk of credential compromise.
- Conduct Regular Supply Chain Security Audits: Audit third-party tools and open-source software for misconfigurations and vulnerabilities, implement software composition analysis (SCA) tools to detect malicious code in dependencies, and maintain an inventory of all software components used in development pipelines.

How Fast Are AI-Powered Attacks Really Accelerating?

Security leaders at the RSA Conference 2026 in San Francisco painted a picture of an industry struggling to keep pace with AI-powered threats. George Kurtz, president and chief executive of CrowdStrike Inc., described the situation in stark terms during his keynote address.

"It's a breakneck pace that I've never seen in my career in technology. The problem is we're doing 200 miles per hour in the car and we're arguing about what radio station to listen to," Kurtz said.

The acceleration extends beyond identity-based attacks. AI is also being used to automate distributed denial-of-service (DDoS) attacks at unprecedented scale. Cloudflare researchers documented a 730% increase in DDoS attacks over the past 15 months, with attackers using AI to automate target reconnaissance, optimize timing, and generate evasive traffic patterns that bypass traditional defenses.

The rise of large, self-managing botnets such as Aisuru has enabled massive attacks that can cripple infrastructure. Last week, the U.S. Department of Justice announced that it had participated in a court-authorized law enforcement operation to disrupt Aisuru and three other global botnets.
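The continuous behavioral monitoring recommended above can be illustrated with a minimal sketch. This is a toy anomaly check, not a production UEBA system; the login-hour baseline, the z-score threshold, and the 10 GiB download cutoff are assumptions chosen purely for illustration.

```python
from statistics import mean, stdev

def flag_anomalies(baseline_hours, new_events, z_threshold=3.0):
    """Flag events whose login hour deviates sharply from a user's
    historical baseline, or that involve an outsized data download --
    a toy stand-in for UEBA-style scoring."""
    mu = mean(baseline_hours)
    sigma = stdev(baseline_hours)
    flagged = []
    for event in new_events:
        z = abs(event["hour"] - mu) / sigma if sigma else 0.0
        if z > z_threshold or event.get("bytes_downloaded", 0) > 10 * 2**30:
            flagged.append(event)
    return flagged

# Hypothetical data: a user who normally logs in during business hours.
baseline = [9, 10, 9, 11, 10, 9, 10, 11, 9, 10]
events = [
    {"user": "jdoe", "hour": 10, "bytes_downloaded": 50_000_000},
    {"user": "jdoe", "hour": 3,  "bytes_downloaded": 0},           # 3 a.m. login
    {"user": "jdoe", "hour": 11, "bytes_downloaded": 20 * 2**30},  # 20 GiB pull
]

print([e["hour"] for e in flag_anomalies(baseline, events)])  # → [3, 11]
```

A real deployment would score many more signals (geolocation, device, resource sensitivity) and learn baselines per user, but the principle is the same: compare each action against that account's own history, not a global rule.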
These botnets were capable of generating DDoS attacks at more than 30 terabits per second, the largest ever seen.

"Do you have enough capacity to handle a 30-terabits-per-second attack? A 34-terabits-per-second attack is pretty much going to knock people out. Ask your vendor, 'What is your capacity on a DDoS attack?'" remarked Grant Bourzikas, chief security officer at Cloudflare.

What Are the Emerging Vulnerabilities in AI Tools Themselves?

As organizations deploy AI agents to automate business processes, new security vulnerabilities are emerging. One of the most popular AI agents this year has been OpenClaw, open-source software designed to run continuously and act on behalf of users. A February report from Security Scorecard Inc. warned that vulnerabilities in OpenClaw deployments were leaving tens of thousands of internet-facing instances exposed to takeover.

When Nvidia Corp. announced that it would launch its own enterprise version of OpenClaw, named NemoClaw, the company noted that it would add privacy and cybersecurity guardrails, along with limits on the agent's network access. Security researchers have identified what they call the "lethal trifecta" for AI agents: access to private data, exposure to untrusted content, and the ability to communicate externally.

"In order for you to deploy an OpenClaw strategy, you first need to have an OpenClaw security strategy," explained Ken Huang, project lead on the OWASP AIVSS Project, which scores AI vulnerabilities.

Can AI Defense Keep Pace With AI Attacks?

The cybersecurity industry is betting that AI-powered defense tools can help close the gap. Google announced that its threat disruption unit would pursue a strategy of technical takedowns, legal action, and product hardening to combat growing threats.
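The "lethal trifecta" described above lends itself to a simple policy check at deployment time: refuse or sandbox any agent that holds all three capabilities at once. The sketch below is hypothetical — the `AgentProfile` structure and capability names are illustrative, not part of OpenClaw or any real framework.

```python
from dataclasses import dataclass

@dataclass
class AgentProfile:
    """Hypothetical capability profile for an AI agent deployment."""
    name: str
    has_private_data: bool         # can read internal or sensitive data
    reads_untrusted_content: bool  # ingests web pages, email, tickets, etc.
    communicates_externally: bool  # can send data outside the organization

def lethal_trifecta(agent: AgentProfile) -> bool:
    """True when all three risk factors are present at once -- the
    combination that enables prompt-injected data exfiltration."""
    return (agent.has_private_data
            and agent.reads_untrusted_content
            and agent.communicates_externally)

support_bot = AgentProfile("support-bot", True, True, True)
doc_search = AgentProfile("doc-search", True, False, False)

print(lethal_trifecta(support_bot))  # True: block or sandbox before deploying
print(lethal_trifecta(doc_search))   # False: two of three factors are absent
```

Removing any one leg of the trifecta (for example, cutting off external communication) breaks the exfiltration path, which is why guardrails like network-access limits matter.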
The company emphasized a philosophy of active defense: using intelligence to protect platforms rather than waiting for attacks to occur. Zscaler Inc., for example, has expanded its AI Security Suite with new features that give enterprises greater visibility and control over how AI is used across their environments. However, security practitioners acknowledge that AI traffic often does not look like normal user activity, making detection and classification a persistent challenge.

The cybersecurity industry faces a critical juncture. Threat actors have successfully adopted AI and are deploying it at scale to conduct identity-based attacks, supply chain compromises, and massive DDoS campaigns. The speed of these attacks, combined with the sophistication of AI-generated deepfakes and voice synthesis, has created a defense gap that traditional security tools cannot close. Organizations must act now to implement identity verification controls, behavioral monitoring, and AI-powered defense tools before the gap widens further.