India has emerged as the second-most targeted country for AI-powered scams globally, according to Meta's First Half 2026 Adversarial Threat Report, with criminal syndicates exploiting the expansion of digital payments and low digital literacy to defraud millions. The findings paint a concerning picture of how artificial intelligence is lowering barriers for fraudsters while making detection increasingly difficult for both platforms and users.

Why Are Indian Users So Vulnerable to AI-Powered Scams?

India's vulnerability to sophisticated online fraud stems from a unique combination of factors that makes the country an attractive target for international scam networks. According to the Meta report, criminal scam syndicates most frequently targeted English-speaking users in the United States, followed closely by users in India. The reasons are multifaceted and deeply rooted in India's digital transformation.

Senior advocate Krishna Grandhi explained that India's large pool of potential victims with moderate incomes, combined with cheap data and the expansion of digital payments, creates ideal conditions for fraud. "The widespread availability of affordable smartphones and low-cost mobile data has dramatically increased internet penetration, enabling fraudsters to reach potential victims through social media messages, phone calls and messaging platforms," Grandhi noted. Lower digital literacy levels also mean many users lack the knowledge to recognize sophisticated scams, even those built on advanced AI techniques.

One particularly alarming scheme highlighted in the report is the "digital arrest" scam, which relies on phone and video calls in which actors impersonating law enforcement officials frighten victims into transferring money. These scams exploit psychological vulnerabilities and cultural trust in authority figures, making them especially effective in the Indian context.

How Is Generative AI Making Scams More Convincing and Harder to Detect?
The integration of generative AI tools into fraud operations has fundamentally changed the threat landscape. Grandhi emphasized that "generative AI has lowered the barrier to entry for scammers and increased the quality and believability of their schemes." What once required significant technical expertise and resources can now be accomplished by criminals with basic AI knowledge and access to publicly available tools.

Fraudsters are leveraging AI in several sophisticated ways to deceive victims and evade detection systems. These tactics include:

- Synthetic Identity Creation: Generative AI tools enable fraudsters to create fake online identities with realistic profile photos, detailed backstories, and convincing communication styles that ordinary users struggle to distinguish from legitimate accounts.
- Hyper-Personalized Lures: AI systems analyze victim data to generate customized fraud messages that incorporate cultural nuances and personal details, making scams feel personally relevant and trustworthy.
- Multi-Turn Conversations: AI enables natural-language interactions that mimic human communication patterns, allowing scammers to build rapport over multiple exchanges before requesting money or sensitive information.
- Deepfake Video Deployment: Fraudsters use deepfake technology to create fake videos that evade platform likeness-detection systems, enabling them to impersonate trusted individuals or authority figures on video calls.

The sophistication of these AI-powered attacks means that traditional security awareness training and basic platform safeguards are increasingly insufficient. Victims often cannot distinguish between real and AI-generated content, even when they are cautious.

What Role Do Cross-Border Crime Networks Play in Targeting India?

Another critical challenge is the international nature of scam operations targeting Indian users.
Many criminal networks operate from large-scale scam centers in Southeast Asia, particularly in parts of Myanmar, Cambodia, and Laos. These organized operations are not static: they constantly shift geographies while refining their fraud narratives to stay ahead of law enforcement and platform enforcement efforts.

Advocate Prashant Mali described the Meta report's findings as a "clarion call" for stronger legal safeguards against rapidly evolving cybercrime networks. For law enforcement, the challenge is that these cross-border networks are extremely difficult to dismantle due to limited jurisdiction, fragmented governance across countries, differing legal frameworks, a lack of extradition agreements, and limited investigative capacity. Criminals can therefore operate with relative impunity, knowing that pursuing them across borders is legally and logistically complex.

How Can Organizations and Individuals Protect Against AI-Powered Deepfake Fraud?

Defending against AI-enabled fraud requires a multi-layered approach that combines technology, training, and regulatory action. Security experts recommend implementing comprehensive safeguards across several dimensions:

- Real-Time Detection Technology: Deploy advanced deepfake detection software that analyzes video and audio streams frame by frame to identify inconsistencies such as unnatural blinking, audio lip-sync errors, and other artifacts of AI-generated content. Solutions like Identy.io's mobile biometric platform integrate on-device deepfake detection alongside passive liveness checks to verify whether users are interacting with real people.
- Employee Training and Simulations: Conduct deepfake phishing simulations that safely rehearse AI-powered voice and video social engineering attacks. This specialized training turns employees into an informed last line of defense by teaching them to recognize AI-generated voice and video lures before they cause damage.
- Compliance with International Standards: Implement detection solutions that meet established security standards such as ISO/IEC 30107 for liveness detection, NIST 800-63-3 for digital identity, and FIDO Alliance Face Verification certification, which explicitly tests for deepfake and spoof resilience.
- Privacy-First Architecture: Choose identity verification systems that process biometric data on-device rather than transmitting it to remote servers, reducing breach risk and meeting privacy-by-design expectations under regulations such as the GDPR and PSD2.

Grandhi pointed to India's recent amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which introduce a statutory definition of "deepfake" and require faster takedown of objectionable content reported to social media platforms. He noted that such regulatory steps, combined with technological measures like digital watermarking and AI-based detection systems, could help platforms identify manipulated media and curb the spread of harmful content.

Beyond technology, Mali suggested that certain organized cybercrime groups may warrant classification under anti-terror frameworks to enable stronger cross-border enforcement. "Designating these groups as terrorists underscores the need for cross-border prosecutions, as seen in disruptions of Mexico-focused meth sales and cross-US-Mexico fentanyl rings, actioning thousands of accounts under Dangerous Organizations policies," he explained.

What Immediate Steps Should Policymakers Take?

Experts stress that India must invest significantly in digital literacy programs as a long-term defense against online fraud.
For policymakers, Grandhi recommends updating the Information Technology Act and the criminal code to specifically cover AI-assisted fraud and synthetic identity theft, ensuring that legal frameworks keep pace with technological evolution.

The scale of the problem is substantial. Meta removed over 10.9 million Facebook and Instagram accounts, 600,000 Facebook Pages, and 112,000 Ad Accounts in 2025 for violating policies against fraud, scams, deceptive practices, and dangerous organizations. Despite these enforcement efforts, the sophistication and volume of attacks continue to grow, underscoring the need for ecosystem-wide accountability from app stores, advertisers, and technology platforms themselves.

As India continues its digital transformation, the convergence of large user populations, expanding digital payment systems, and AI-powered fraud tools creates an urgent imperative for action. Without coordinated efforts across technology, law enforcement, regulation, and public education, the country's digital economy and user trust remain at significant risk.