Artificial intelligence is already powering roughly half of all scams through deepfakes, fake identities, and forged documents, yet only 7% of financial institutions report being more than moderately prepared to defend against AI-driven fraud. The immediate crisis is deepening while a separate long-term threat looms: cybercriminals are experimenting with quantum AI systems that could eventually render today's encryption obsolete.

How Are Scammers Using AI to Create Fake Identities?

The tools criminals use aren't hidden on the dark web. They're freely available online. During a CBS News investigation, fraud prevention experts demonstrated how easily fake identification documents can be created using publicly accessible software. They generated a fabricated passport, populated with a mix of real and false personal information, that could pass many verification systems.

What makes this particularly dangerous is how little information scammers actually need. Even individuals who carefully protect their personal data can be vulnerable. A single social media post, an old Facebook birthday wish with the wrong date, or information from a data breach can provide enough details to construct convincing fake documents.

"I can create a very realistic-looking document that can be used almost to get approved 100% of the time," explained Matt Vega, Chief of Staff at Sardine, a fraud prevention company.

Experts estimate that roughly half of all scams now involve AI tools, including deepfakes, identity theft schemes, and fabricated documents. The sophistication of these tools is staggering. During demonstrations, fraud prevention experts showed how consumer-grade applications can transform anyone's appearance in real time, creating convincing deepfake videos in minutes.

- Website Cloning: Scammers can screenshot legitimate websites and use AI to generate near-identical replicas within minutes, designed to steal login credentials or financial information without requiring sophisticated technical skills.
- Document Forgery: Publicly available tools allow criminals to create realistic fake passports, driver's licenses, and other identification documents populated with real and false personal information.
- Identity Impersonation: AI-powered deepfakes enable scammers to impersonate high-profile figures, business executives, or individuals during video-based identity verification checks.

Why Are Organizations Struggling to Defend Against These Threats?

The defense gap is enormous. According to research from the Association of Certified Fraud Examiners (ACFE) and SAS, only 7% of respondents said their organization was more than moderately prepared to detect or prevent AI-powered fraud. This gap is widening as threats accelerate.

Deepfake technology has become a primary weapon for scammers. The AI Incident Database documented more than 100 distinct deepfake incidents between November 2025 and January 2026, and roughly three-quarters of survey respondents report an uptick in deepfake-driven social engineering over the past two years. Traditional advice for spotting scams, like requesting a video chat to verify someone's identity, no longer works when attackers can create convincing fake videos in minutes.

"AI-generated fraud is going to be the big growth industry of all time. It is really easy nowadays to create a deepfake video of someone else," warned Soups Ranjan, CEO of fraud prevention company Sardine.

Financial institutions face particular pressure because they're bound by strict compliance requirements and customer expectations. Implementing new defense technologies is complex and resource-intensive, yet the cost of inaction is mounting as scammers become more sophisticated.

How to Recognize and Defend Against AI-Driven Fraud

- Watch for Visual Glitches: Look for unnatural facial movements, lack of blinking, or video glitches that may indicate deepfake content, though these signs are becoming harder to spot as the technology improves.
- Verify Through Alternative Channels: If someone requests sensitive information or money via video, call them back using a phone number you know is legitimate rather than one they provided.
- Monitor Personal Information Exposure: Regularly audit your social media accounts and old posts for personal details like birthdates, addresses, or family information that scammers could use to construct fake documents.
- Implement Multi-Factor Authentication: Use authentication methods beyond passwords and video verification, such as hardware security keys or biometric verification, to prevent account takeover even if credentials are compromised (a sketch of how one such factor is checked follows this list).
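To make the last point concrete, here is a minimal sketch of how a server might verify a time-based one-time password (TOTP), the factor behind most authenticator apps. It uses only Python's standard library and follows RFC 6238; it is an illustration, not a production recipe, and a real deployment should use a vetted library and tolerate clock drift between client and server.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time password (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken from
    # the last nibble of the HMAC, then reduce to `digits` decimal digits.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify(secret_b32: str, submitted: str) -> bool:
    """Compare a user-submitted code in constant time."""
    return hmac.compare_digest(totp(secret_b32), submitted)
```

Because the code is derived from a shared secret and the current time, a stolen password, or a convincing deepfake on a video call, is not enough on its own to pass this check.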
What Is the Quantum Computing Threat to Encryption?

While organizations struggle with current AI-driven fraud, a parallel threat is quietly emerging. Bad actors are already experimenting with quantum AI systems, according to the ACFE and SAS research.

Quantum computing represents a fundamental shift in how computers process information. Unlike traditional computers, which use binary code, quantum computers leverage quantum mechanics to process multiple possibilities simultaneously, making them exponentially faster at solving certain types of problems, including the integer factoring that underpins widely used public-key encryption.

The encryption protecting financial data relies on mathematical problems that would take traditional computers thousands of years to crack. Quantum computers could solve these same problems in hours or days. According to the ACFE and SAS study, most respondents expect quantum AI to significantly impact fraud prevention by 2030, and roughly 10% report that it is already having an effect.

"We're close to where quantum computing is going to break encryption. This goes back to the whole risk that we see with the way we're securing data today. Data is tokenized or encrypted; card numbers are tokenized as they're transmitted as this is a requirement for PCI compliance. If quantum computing is able to break that encryption, then we're ultimately sending card data in the clear and it's setting us back 20 years. Tokenization will mean nothing," warned Tracy Goldberg, Director of Cybersecurity at Javelin Strategy and Research.

Payment Card Industry (PCI) compliance currently requires that card numbers be tokenized, meaning they're replaced with random strings of characters during transmission. If quantum computers can break the encryption protecting these tokens, the entire security model collapses. According to Goldberg, financial institutions would essentially be transmitting card data unprotected, exposing billions of transactions to interception and fraud.
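To see why, consider what tokenization actually does. Below is a minimal sketch of vault-style tokenization; the in-memory dictionary stands in for a hardened, access-controlled token vault, and all names here are illustrative. The token itself has no mathematical relationship to the card number, so the scheme's security rests entirely on the encryption protecting the vault and the channels around it, which is precisely what quantum computing threatens.

```python
import secrets

# Stand-in for a hardened token vault; real vaults encrypt this mapping
# at rest and gate every lookup behind strict access controls.
_vault: dict[str, str] = {}

def tokenize(pan: str) -> str:
    """Replace a card number (PAN) with a random, meaningless token."""
    token = secrets.token_urlsafe(16)  # no mathematical link to the PAN
    _vault[token] = pan
    return token

def detokenize(token: str) -> str:
    """Only the vault can map a token back to the original card number."""
    return _vault[token]

token = tokenize("4111111111111111")   # test PAN, not a real card
print(token)                           # safe to pass to downstream systems
print(detokenize(token))               # recoverable only inside the vault
```

A token intercepted in transit is worthless on its own; Goldberg's warning is that once quantum computers can break the encryption wrapping the vault and the transmission channels, that guarantee disappears.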
Why the Timing of These Threats Creates a Double Crisis

Organizations face a compounding problem: today's threats are overwhelming current defenses while tomorrow's quantum threats remain largely unaddressed. The timeline for quantum computing's threat to encryption is uncertain, but experts agree it's approaching.

This creates a "harvest now, decrypt later" paradox: data encrypted today could be decrypted by quantum computers in the future, meaning sensitive information stolen now could be compromised years later.

The industry is caught between two urgent priorities: defending against current AI-driven fraud with automated systems and employee training, while simultaneously beginning the transition to quantum-resistant encryption standards, such as the post-quantum algorithms NIST finalized in 2024. The window to prepare for the quantum era is closing, and those who delay may find their security infrastructure obsolete before they realize the threat has arrived.

The message from security experts is clear: the immediate crisis of AI-powered fraud demands action now, but the long-term threat of quantum computing requires parallel preparation. Organizations that address only the present threat will find themselves unprepared for the future, while those who ignore current deepfake and identity fraud risks will face immediate losses.
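As a concrete close to that second priority, the sketch below shows what a quantum-resistant key exchange looks like in code. It assumes the open-source liboqs-python bindings (the `oqs` module) are installed on top of the liboqs C library; the algorithm name and exact API vary by version, so treat this as an illustration of the post-quantum transition rather than a production recipe.

```python
# Post-quantum key encapsulation (ML-KEM, formerly Kyber) via the
# liboqs-python bindings. Assumes liboqs and `liboqs-python` are
# installed; available algorithm names depend on the installed version.
import oqs

ALG = "ML-KEM-768"

with oqs.KeyEncapsulation(ALG) as receiver, oqs.KeyEncapsulation(ALG) as sender:
    # Receiver generates a keypair; the secret key stays inside the object.
    public_key = receiver.generate_keypair()
    # Sender derives a fresh shared secret plus a ciphertext to transmit.
    ciphertext, sender_secret = sender.encap_secret(public_key)
    # Receiver recovers the same shared secret from the ciphertext.
    receiver_secret = receiver.decap_secret(ciphertext)
    assert sender_secret == receiver_secret  # both sides now share a key
```

Unlike RSA-style key exchange, the underlying hard problem here (structured lattices) has no known efficient quantum attack, which is what makes schemes like this candidates for the parallel preparation the experts above describe.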