Why the Global South Is Being Left Behind in AI Governance: The Inclusion Crisis Nobody's Talking About

As artificial intelligence reshapes economies worldwide, a critical gap is widening between wealthy nations and the Global South, where AI's risks hit hardest but whose voices are barely heard in policy discussions. While 91 countries and international organizations endorsed the New Delhi AI Impact Summit Declaration in February 2026, the real challenge lies in whether these commitments will address the uneven impact of AI on developing economies.

How Is AI Deepening Inequality in Emerging Economies?

The risks facing the Global South are distinct and urgent. Unlike wealthy nations, where AI adoption is carefully managed through regulatory frameworks, emerging economies face a different set of pressures. Informal workers, farmers, and communities without robust digital infrastructure are particularly vulnerable to disruption.

  • Job Displacement in Informal Sectors: In countries where large portions of the population work outside formal economic systems, AI-driven automation poses an immediate threat to livelihoods without the safety nets available in developed nations.
  • Deepfakes and Misinformation: Emerging economies lack the content moderation infrastructure to combat AI-generated deepfakes and disinformation, leaving populations more exposed to manipulation.
  • Digital Divide Expansion: Communities without access to digital tools and education are being left further behind as AI becomes central to economic participation.

Maya Sherman, an AI policy researcher and technology diplomat at the Oxford Internet Institute, emphasized this disparity during a recent policy discussion. Sherman noted that governance discussions must include voices from the Global South, as these regions face distinct challenges that Western-centric policy frameworks often overlook.

"AI can unintentionally deepen inequalities in emerging economies," Sherman explained, pointing to how shifts in technology adoption can be particularly disruptive in countries where large portions of the population operate outside formal economic systems.

Maya Sherman, Innovation Attaché at the Israeli Embassy in India and AI Policy Researcher at the Oxford Internet Institute

What Is India's Alternative Approach to AI Regulation?

Rather than adopting strict legislation like the European Union's AI Act, India has pursued what experts call "smart experimentation." This flexible model combines advisory frameworks with targeted policies such as the Digital Personal Data Protection Act, allowing the country to balance innovation with safeguards while observing how global regulatory models evolve.

This approach reflects a broader recognition that one-size-fits-all regulation doesn't work for diverse economies. India's strategy creates space for practical AI applications while building governance capacity over time, rather than imposing rigid rules that might stifle beneficial innovation in developing contexts.

Can AI Actually Help Farmers and Informal Workers?

Despite the risks, AI also offers genuine opportunities for emerging economies if deployed thoughtfully. India has launched AI literacy initiatives targeting farmers and informal workers, demonstrating how technology can address real economic challenges.

Through programs linked with the Global Partnership on AI, farmers have been introduced to practical applications including crop monitoring, weather prediction, and multilingual chatbot support. In a country with extraordinary linguistic diversity, AI-powered translation tools could become a powerful enabler of digital inclusion, allowing workers to access information and markets previously beyond their reach.

Sherman highlighted this potential, noting that countries should invest in sovereign AI models capable of supporting local languages and cultural contexts. Most AI systems today are optimized for English and a handful of major languages, leaving billions of people dependent on tools designed for other markets.

Why Does Linguistic Diversity Matter for Global AI Governance?

One of the most overlooked aspects of AI governance is language. The vast majority of AI training data comes from English-language sources, and most advanced AI models are optimized for English speakers. This creates a structural disadvantage for non-English speaking populations and reinforces existing power imbalances in the global AI ecosystem.

Sherman argued that this is not merely a technical problem but a governance imperative. If AI is to serve humanity equitably, policymakers must prioritize linguistic diversity as a core component of responsible AI development. This means supporting the creation of AI models trained on diverse languages and cultural contexts, ensuring that emerging economies are not passive consumers of AI tools designed elsewhere.

The February 2026 policy roundup revealed that countries are beginning to address content and data governance issues globally. The European Commission established guidelines requiring very large online platforms to provide transparency mechanisms for media service providers and to notify them before removing content. Meanwhile, India introduced mandatory labeling for synthetic content, and Brazil advanced bills protecting minors online. These measures suggest growing recognition that AI governance must address both technological risks and social impacts.

What Does Inclusive AI Governance Actually Look Like?

The challenge ahead is translating the New Delhi AI Impact Summit Declaration's endorsement by 91 countries into concrete action that benefits emerging economies. This requires moving beyond top-down policy frameworks to include voices from the Global South in shaping how AI is developed, deployed, and governed.

Sherman's work demonstrates that effective AI governance combines multiple elements: understanding how technologies can be misused, recognizing the philosophical paradoxes embedded in AI ethics, and designing policies that account for local economic and cultural contexts. For the Global South, this means governance frameworks that protect vulnerable populations while enabling beneficial innovation, rather than simply importing regulatory models from wealthy nations.

The path forward requires what Sherman calls "smart experimentation" at the global level. Rather than waiting for perfect international consensus, countries should be encouraged to pilot different governance approaches, share learnings, and adapt frameworks based on evidence. This is particularly important for emerging economies, where the stakes of getting AI governance wrong are highest and the resources for managing unintended consequences are lowest.