OpenAI's GPT-4 Was Secretly Tested on Users in India First, New Investigation Reveals

Microsoft quietly deployed GPT-4 through its Bing search engine to real users in India in 2022 without their knowledge, according to an investigation published by The New Yorker. The undisclosed test bypassed safety protocols that OpenAI and Microsoft had jointly established, and the deployment was not initially disclosed to OpenAI's leadership.

How Did This Happen Without Oversight?

The investigation, conducted by journalists Ronan Farrow and Andrew Marantz and based on more than 100 interviews and internal documents, reveals that Microsoft proceeded with the India deployment despite OpenAI's advice against it. The test circumvented a joint safety board created specifically to review AI systems before release.

According to the report, OpenAI's leadership was not properly informed about the deployment. Board member Tasha McCauley learned about it informally after a board meeting, while CEO Sam Altman did not mention the deployment during the session. Researcher Jacob Hilton described the incident as having been "kind of completely ignored."
Initially, Microsoft denied that GPT-4 was involved in the India tests. Spokesperson Frank Shaw told The New York Times that the model was not used. However, following publication of The New Yorker investigation, the company confirmed that GPT-4 had indeed been tested on Bing in India.

Why Does This Matter for AI Deployment?

The revelation comes at a critical moment for artificial intelligence governance. India has emerged as a major market for AI tools, with approximately 180 million monthly ChatGPT users and rapid growth in generative AI app downloads. Using an entire country's population as an unknowing test market raises significant ethical questions about informed consent and AI developers' responsibility to disclose testing activities.

The incident highlights a gap between the safety protocols that major AI companies establish and their actual implementation. When companies create oversight boards and safety review processes, the expectation is that these mechanisms will be followed consistently. The India deployment suggests that commercial pressures or internal disagreements can override these safeguards.

Steps for Responsible AI Testing and Deployment

  • Transparent Disclosure: Companies should publicly announce when and where they are testing new AI models, especially when real users are involved, rather than conducting secret deployments.
  • Safety Board Authority: Joint oversight boards between companies should have binding authority over AI deployments, not merely advisory roles that can be bypassed.
  • Leadership Accountability: Executive leadership must be formally informed and must explicitly approve major AI deployments, with documentation of that approval.
  • Regional Considerations: Testing in developing markets should include consultation with local regulators and stakeholders, not just internal company decisions.

Sam Altman has since publicly acknowledged the importance of AI governance. On February 19, 2026, he spoke at an AI summit in New Delhi, calling for the creation of an international body to regulate artificial intelligence. Because this call came after the India testing incident, it raises the question of whether that experience shaped his public position on regulation.

The investigation underscores a broader tension in the AI industry between rapid innovation and responsible deployment. While companies like OpenAI and Microsoft have created safety mechanisms, the India case demonstrates that these mechanisms can be circumvented when business interests diverge from safety protocols. As AI models become more powerful and are deployed to billions of users globally, the stakes of such oversights increase significantly.