The $300 Deepfake Toolkit: Why Banks Can No Longer Trust What They See

For less than $300, a criminal can now defeat a bank's identity verification system and open an account with a fake face in under five minutes. This isn't theoretical. At the 2026 RSA Conference, cybersecurity experts demonstrated how readily available AI tools, stolen data, and virtual camera software are actively being used to bypass the biometric checks that banks rely on today.

The threat is no longer emerging; it's operational. Fraudsters are purchasing complete kits on Telegram, an encrypted messaging app, that include everything needed to impersonate someone: stolen Social Security numbers for as little as $20, background reports for $100, and AI face generators paired with deepfake video software. The criminal marketplace has evolved into a crime-as-a-service ecosystem where even novice bad actors can acquire the skills and tools to commit sophisticated identity fraud.

How Are Criminals Bypassing Bank Security Systems?

The attack follows a straightforward three-step process that exploits a fundamental weakness in how banks verify identity. Criminals use software tools like "ProKYC" to execute this bypass with alarming efficiency.

  • Counterfeit Document Creation: Bad actors generate a fake identity document using stolen personal information and an AI-generated face. They print on the same synthetic materials that legitimate governments use, such as Teslin substrate, with readily available printers like the Epson H6000III to mass-produce forged IDs that replicate authentic security features.
  • Deepfake Video Generation: Criminals create a deepfake video that maps the AI-generated face onto the facial movements expected by the bank's verification system. These synthetic videos are now convincing enough to fool both automated systems and human reviewers.
  • Direct Injection Attack: Using virtual camera software, fraudsters feed the manipulated video directly into the bank's live verification portal, bypassing the camera entirely. The institution's system receives what appears to be a legitimate camera feed but is actually a synthetic video stream.

During a live demonstration at the conference, cybersecurity experts showed this entire process defeating a financial exchange's liveness check in just five minutes. The speed and simplicity of the attack underscore how far ahead of defensive measures criminals have moved.
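The injection step also explains why naive client-side defenses fall short. One weak countermeasure a bank might attempt is checking the capture device's reported name against known virtual-camera drivers. The sketch below is illustrative only: the blocklist entries and the `is_virtual_camera` helper are assumptions for this example, not any vendor's real detection API.

```python
# Naive client-side heuristic: flag capture devices whose reported name
# matches a known virtual-camera driver. The device name is attacker-
# controlled (drivers can be renamed, or video injected below this
# layer), which is why such checks alone cannot stop injection attacks.
KNOWN_VIRTUAL_CAMERAS = (
    "obs virtual camera",
    "manycam",
    "snap camera",
    "droidcam",
)

def is_virtual_camera(device_name: str) -> bool:
    """Return True if the reported device name resembles a virtual camera."""
    name = device_name.lower()
    return any(marker in name for marker in KNOWN_VIRTUAL_CAMERAS)
```

Because the check relies entirely on attacker-controlled metadata, it raises the bar only slightly; it is the kind of surface-level signal the attacks described here are designed to defeat.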

Why Are Traditional Liveness Checks No Longer Effective?

Banks have long relied on "liveness" checks, which ask users to perform actions like blinking, smiling, or moving their head to prove they are real humans. The assumption was that AI couldn't convincingly replicate these behaviors. That assumption is now dangerously outdated.

Modern generative AI can replicate facial movements, expressions, and visual artifacts with such precision that traditional detection approaches have become unreliable. Researchers have demonstrated that attackers don't even need to present themselves to the camera. Instead, they inject a synthetic video stream directly into the device, making it appear as though a real user is present while bypassing basic liveness checks entirely.

"The research demonstrates why active liveness solutions are particularly vulnerable because they still focus on surface-level signals. They ask users to blink, smile, or analyse basic image artefacts. The problem is that modern AI-generated deepfakes can convincingly replicate these behaviours," explained Dominic Forrest, Chief Technology Officer at iProov.

Dominic Forrest, Chief Technology Officer at iProov

The criminal underground has even created massive databases of user-submitted verification photos and videos to help fraudsters practice and perfect their bypass techniques. This shared knowledge accelerates the evolution of attacks faster than banks can patch their defenses.

What Do Security Experts Recommend Instead?

Rather than trying to detect whether content is real or fake, security experts are advocating for a fundamental shift in how identity verification works. The new approach focuses on proving genuine human presence in the moment, rather than analyzing whether something looks authentic.

This strategy, called "contextual verification," moves away from one-time onboarding checks toward repeatable, risk-based verification that scales with user behavior. Instead of asking "Who is this?" the system asks "What can be proven here and now?"

One promising technology is passive liveness, exemplified by solutions like iProov's Dynamic Liveness. This approach uses controlled illumination to project a unique, one-time, unpredictable sequence of colors onto the user's face. This interaction cannot be replayed or synthetically generated, simultaneously proving that the person is the right individual, that they are a real human, and that they are authenticating in the present moment.

"By establishing a unified methodology for injection attack detection, CEN 18099 raises the bar for the entire industry. It moves biometric security from a best-effort approach to one grounded in measurable resilience against real-world AI threats," stated Dominic Forrest.

Dominic Forrest, Chief Technology Officer at iProov

Banks should also implement a tiered approach to security. Low-risk tasks can remain seamless with simple on-device biometrics, while medium-risk requests use passive verification. For high-stakes moments like account recovery or large transfers, dynamic liveness provides maximum fraud resilience without sacrificing user experience.
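The tiering above can be expressed as a simple policy table. This is a minimal sketch: the action names, the $10,000 threshold, and the method identifiers are placeholders invented for illustration, not any bank's real configuration.

```python
from enum import Enum

class Risk(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

# Illustrative mapping from risk tier to verification method,
# mirroring the tiered approach described in the article.
POLICY = {
    Risk.LOW: "on_device_biometric",
    Risk.MEDIUM: "passive_liveness",
    Risk.HIGH: "dynamic_liveness",
}

def classify(action: str, amount: float = 0.0) -> Risk:
    """Toy risk classifier based on action type and transfer size."""
    if action in {"account_recovery", "add_payee"}:
        return Risk.HIGH
    if action == "transfer":
        return Risk.HIGH if amount >= 10_000 else Risk.MEDIUM
    return Risk.LOW

def required_verification(action: str, amount: float = 0.0) -> str:
    """Return the verification method the policy demands for this action."""
    return POLICY[classify(action, amount)]
```

Keeping the policy as data rather than scattered conditionals makes it easy to tighten a tier (say, moving large transfers to dynamic liveness) without touching the verification code itself.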

What Is the Timeline for Banks to Respond?

Cybersecurity experts have outlined a specific roadmap for financial institutions to harden their defenses. The timeline is aggressive because the threat is already active in the wild.

  • Within One Week: Designate an AI point person within your organization, someone who deeply understands the technology and can lead the institution's response to AI-enabled fraud.
  • Within Three Months: Map your attack surface to determine exactly where and how threat actors could deploy deepfake tools against the company, its employees, and its customers.
  • Within Six Months: Update your risk frameworks, typically your implementation of the NIST Cybersecurity Framework, and execute the necessary operational and technological changes identified during earlier assessments.

The overarching message from security leaders is clear: the threat of artificial intelligence compromising onboarding processes is not a future concern. It is happening now. Because the criminal underground continues to adapt and innovate its evasion tactics, banks must evolve their defenses at the same pace.

"We can't ignore the AI threat. It's not hype. It's real. With the right people, processes, and tools, we can protect our organizations and our customers," warned Eric Huber, TD Bank's head of adversarial intelligence.

Eric Huber, Head of Adversarial Intelligence and Disruption at TD Bank

The shift from detection-based security to presence-based verification represents the most significant change in identity verification since digital banking began. Banks that continue relying on traditional active liveness checks and document verification alone will face mounting fraud losses, regulatory compliance failures, and reputational damage. Those that invest in contextual verification and passive liveness technologies will build genuine resilience against the AI-enabled threats already operating in the criminal marketplace.