India's AI Governance Shift: Why the Government Is Abandoning Its 'Light-Touch' Approach
India is abandoning its hands-off approach to artificial intelligence regulation and moving toward a more structured governance framework. A six-member Technology and Policy Expert Committee (TPEC), formed on April 13, is preparing recommendations that could fundamentally reshape how the country oversees AI development and deployment. This marks a significant departure from the "light-touch" regulatory stance that has defined India's AI policy since the IndiaAI Mission launched.
The shift reflects growing alarm over emerging risks posed by increasingly capable AI systems. Recent controversies involving explicit AI-generated content on Grok, the chatbot integrated with X, combined with the launch of advanced models like Anthropic's Claude Mythos, have prompted government officials to reconsider whether existing laws are sufficient to protect critical infrastructure.
What's Driving India's Regulatory Rethink?
India's previous approach relied on existing legal frameworks, particularly the Information Technology Act and sector-specific rules, rather than creating a dedicated AI law. However, officials now acknowledge that this strategy may be inadequate for the current technological landscape. One government official told reporters that "the current approach being considered for various sectors by the TPEC could be different from the light-touch governance guideline that the PSA's committee had recommended".
The primary concern centers on how advanced AI systems could compromise critical sectors. Another official explained the urgency: "When you see the capabilities that a model like Mythos brings, it becomes clear that this can put various critical sectors at risk, such as financial services or energy. All options will be kept open, including the eventual possibility of an AI law across sectors".
Beyond deepfakes and harmful content, experts are flagging foundational security risks. Vinayak Godse, chief executive of the Data Security Council of India (DSCI), noted that "a lot of India's digital systems run on infrastructure that can be potentially made vulnerable by unprecedented cyber attack capabilities that foundational models have demonstrated". He further warned that multiple AI systems, including open-source models designed to identify and exploit vulnerabilities, could create severe cascading risks for critical infrastructure.
How Is India Restructuring Its AI Governance?
To address these concerns, the government has established two complementary bodies working in tandem. The TPEC, chaired by Electronics and IT Secretary S. Krishnan, includes experts from leading institutions and industry groups. Alongside it operates the AI Governance and Economic Group (AIGEG), a 10-member inter-ministerial committee chaired by Union IT Minister Ashwini Vaishnaw.
The division of labor is deliberate. The AIGEG will lead India's overall AI policy direction and coordinate efforts across government ministries, while the TPEC will provide specialized technical and policy expertise, turning complex issues into practical regulatory recommendations. Both bodies were constituted as standing committees with no fixed deadline for submitting their first reports, suggesting a methodical approach to policy development.
Industry stakeholders recognize the need for a comprehensive approach. Ashish Aggarwal, vice-president of public policy at Nasscom, stated: "There is now broad alignment that AI cannot be approached only through a sectoral lens. As a horizontal technology, it cuts across industries, and while sector-specific regulations will continue to apply, the focus is now on ensuring a coherent and specialized approach to AI governance as a whole".
Steps India Is Taking to Strengthen AI Oversight
- Content Labeling Requirements: The government amended the Information Technology Rules in February 2026 to require platforms to clearly label synthetically generated content, bringing AI-generated material within regulatory scope for the first time.
- Faster Compliance Timelines: Proposed amendments to the IT Act would require intermediaries like Google, Meta, and X to remove unlawful content within three hours of receiving government directions, a sharp reduction from the 36-hour window under the existing IT Rules, 2021.
- Platform Accountability: The government has flagged X's lack of responsiveness, noting that the platform submitted formal responses to only 13 out of 94 government intimations issued between 2024 and 2026, signaling enforcement pressure ahead.
- Judicial Scrutiny: The Gujarat High Court issued notices to Meta, Google, X, Reddit, and Scribd in response to a petition seeking stronger regulatory frameworks to curb deepfake content, with hearings scheduled for May 8.
Experts emphasize that India needs a multi-layered approach to governance. Vinayak Godse proposed a "tri-model approach that addresses current challenges swiftly, prepares for what comes in the next 12 months, and systematically imagines what would come in the next two to three years". This framework acknowledges that AI capabilities are evolving rapidly and regulatory strategy must adapt accordingly.
The policy rethink also reflects a broader global trend. While the Trump administration in the United States has promoted a "light-touch" regulatory posture emphasizing industry-led standards, other jurisdictions are moving in the opposite direction. Connecticut's Senate recently passed one of the nation's most expansive AI regulatory frameworks, imposing detailed compliance obligations on employers using AI in hiring and employment decisions. This divergence suggests that countries and states are increasingly tailoring AI governance to their specific risk profiles and values.
India's shift is particularly significant because it signals that even governments initially skeptical of prescriptive AI regulation are reconsidering their stance as real-world harms emerge. The government has not yet committed to a standalone AI law, but officials have explicitly stated that "all options will be kept open," indicating that a comprehensive legislative framework remains a possibility.
The TPEC and AIGEG are expected to seek broad-based inputs from industry, academia, and civil society before finalizing recommendations. This consultative approach may help balance innovation incentives with safety and security concerns, though it also means that India's final regulatory framework may take several more months to crystallize.