AI-Powered Scam Factories Now Dwarf the Global Drug Trade, Experts Warn

Artificial intelligence has transformed cybercriminals from lone operators into organized industrial operations that now generate more revenue than the global drug trade, according to fintech experts speaking at Money20/20 Asia 2026. The shift represents a fundamental change in how financial institutions must approach security, moving away from static rule-based systems toward continuous, AI-powered defenses that can adapt in real time.

How Have AI Deepfakes Become the Dominant Fraud Method?

Just two years ago, deepfake attacks were barely distinguishable from genuine content. By 2025, they had become the dominant attack method, with virtually all identity fraud attempts now using AI-generated imagery. What makes this shift particularly dangerous is that modern deepfakes embed what experts call "adversarial noise," a data science technique specifically designed to defeat automated fraud detection systems.

"Not only are they using an AI model, they've got a data science team behind them, intentionally knowing what detection techniques are happening and developing techniques to evade that computer vision detection model. These are full-blown industrial parks," stated Niki Luhur, chief executive officer of VIDA Digital Identity.

Niki Luhur, CEO at VIDA Digital Identity

The sophistication of these operations reveals a troubling reality: cybercriminals are no longer working in isolation. They employ dedicated data science teams that study how fraud detection works and deliberately engineer attacks to bypass those systems. This level of organization and investment suggests these are not casual criminal enterprises but rather sophisticated business operations with significant resources.

What Is the Human Cost of Industrial-Scale Fraud?

Beyond the financial losses, the human toll of these operations has become deeply alarming. Reports indicate that people are being trafficked across Southeast Asian borders and forced into slave labor within scam compounds. Individuals are reportedly lured with fake job offers, transported across borders, and held captive to conduct fraud operations. This transformation of cybercrime into a human trafficking enterprise represents a new dimension of the threat that extends far beyond financial loss.

The scale of these operations has grown so large that they now rival traditional organized crime in both revenue and operational complexity. Yet unlike drug trafficking, which has received decades of law enforcement attention and international cooperation, AI-powered fraud remains largely uncoordinated in its response across jurisdictions.

How to Strengthen Your Organization Against AI-Powered Fraud

  • Integrate Security Systems: Connect KYC (Know Your Customer) teams, onboarding systems, authentication platforms, and transaction monitoring tools so they share real-time data rather than operating in isolated silos, allowing detection of suspicious patterns across the entire customer lifecycle.
  • Upgrade from Static Rules: Replace legacy fraud detection systems built on engineered static rules with continuous, AI-driven defenses that can adapt to new attack methods as they emerge, rather than waiting for rule updates.
  • Implement Human Oversight: Maintain human review in the fraud detection loop, as AI systems cannot grasp intent or adapt to regional context without human input, such as understanding that utility bills in certain regions commonly carry advertisements.
  • Conduct Continuous Vulnerability Testing: Use existing large language models (LLMs) against publicly available cybersecurity frameworks to expose vulnerabilities in your systems at a fraction of the cost and time of traditional manual penetration testing.

Financial institutions face a critical choice: continue relying on outdated, static security rules that criminals have learned to circumvent, or invest in integrated systems with continuous AI-driven defenses. The experts at Money20/20 Asia were emphatic that the current approach is insufficient.

Why Are Financial Institutions Particularly Vulnerable?

A structural flaw that criminals actively exploit is the siloed nature of security infrastructure within most financial institutions. Different teams and systems rarely communicate with each other, creating gaps that sophisticated attackers can exploit. When a customer's face, device, biometrics, and transaction history are not connected in real time, fraud can slip through the cracks.

"If you can just connect the time when a customer comes in and the time when money flows out of that person's account and connect their face, their device, and their biometrics, you're going to be in a lot better shape. You're going to solve most of your problems by doing something that's honestly relatively simple. Not easy, but simple," explained Niki Luhur.

Niki Luhur, CEO at VIDA Digital Identity

The problem is compounded by the fact that AI has dramatically lowered the barrier to entry for social engineering fraud. As Carolyn Fox, director of Trust and Safety at TELUS Digital, noted, criminals no longer need massive factories of people. A single operator with access to AI tools can generate convincing phishing emails, voice clones, and deepfake videos at scale.

What Role Does Regulatory Oversight Play?

Experts argue that vague regulatory frameworks have failed to keep pace with the threat. Principles-based oversight that simply requires systems to be "safe" without defining what safety means has proven ineffective. Some regulators are now taking a more prescriptive approach.

The Philippines' Anti-Financial Account Scamming Act (AASA) mandates transaction monitoring and behavioral analytics across all financial institutions and fintechs. Singapore's Monetary Authority requires continuous penetration testing and vulnerability assessments. These prescriptive regulations force organizations to implement specific security measures rather than leaving implementation to interpretation.

Meanwhile, AI safety itself has become a cybersecurity issue that extends beyond fraud. Modern AI systems are now targets, tools, and attack surfaces simultaneously. They can be attacked directly through prompt injection, data poisoning, and model extraction. They can also be weaponized to commit fraud, phishing, and impersonation at industrial scale. The National Institute of Standards and Technology (NIST) now treats secure AI, AI-enabled defense, and AI-enabled attacks as distinct but intertwined cybersecurity problems.

The convergence of these threats means that organizations can no longer treat AI as merely a productivity layer. Instead, they must treat it as a cyber system requiring identity controls, software assurance, logging, procurement scrutiny, model evaluation, human approval gates, and incident response planning. The stakes have never been higher, and the time to act is now.