The EU's AI Act Has a Troubling Export Problem: How European Rules Are Funding Surveillance Abroad

The European Union's ambitious AI Act sets strict rules for high-risk artificial intelligence systems at home, yet a new report exposes a stark contradiction: the EU is simultaneously funding and exporting those same high-risk AI tools to countries in the Middle East, North Africa, and Palestine, where they enable surveillance, repression, and human rights violations.

What Is the EU Funding Through Its AI Export Programs?

A comprehensive report from 7amleh, a Palestinian digital rights organization, reveals how European policies have financed and exported advanced digital surveillance technologies used to monitor, control, and suppress populations across the SWANA region (Southwest Asia and North Africa). The research identifies three primary pathways through which these technologies reach the region.

  • Migration Control Funding: The EU provides direct financial support to governments in the region to deploy advanced surveillance technologies, including biometric identification systems and risk-analysis tools designed to monitor and restrict population movement.
  • Research and Innovation Grants: European scientific research and innovation grants fund Israeli weapons and technology companies that develop AI-based tools for military operations and surveillance systems targeting civilian populations.
  • Direct Technology Exports: European technology companies export high-risk AI systems such as facial recognition, biometric scanning, smart city infrastructure, and digital surveillance networks directly to countries across the region.

These exports represent a fundamental disconnect between the EU's regulatory ambitions and its actual practices. The AI Act, which took effect in 2024, classifies facial recognition, biometric surveillance, and predictive security systems as high-risk AI applications that require strict safeguards, transparency, and human oversight within EU borders. Yet the same technologies are being deployed abroad with minimal accountability or human rights protections.

How Does This Contradict the EU's Stated Values?

The 7amleh report argues that the EU's approach reveals a selective application of human rights standards. The organization stated that while the EU manufactures consent for regulation domestically, it simultaneously profits from and enables the expansion of surveillance and repression capacities in other regions. This pattern reflects what researchers describe as a longer arc of selective human rights application by the EU, where protections remain applicable only to some populations, not others.

The research also uncovers a broader network of actors enabling this surveillance infrastructure. Access Now has documented the close relationships between private technology companies and EU institutions in migration policy, revealing how the technology, security, and big data industries shape the EU's approach to security and migration. Additionally, universities across Europe are embedded in migration and border regimes, contributing to what researchers call a growing border-industrial-academic complex.

Ways to Understand the Systemic Issues Behind EU AI Export Practices

  • Profit Incentives Override Rights Protections: The political elites and capitalist class continue to manufacture consent for increased surveillance investments while fundamentally profiting from the expansion of control systems, with violations intensifying against marginalized, racialized, trans, working-class, and migrant populations.
  • Regulatory Gaps Enable Dual Standards: By design, none of the EU's legal mechanisms ensures meaningful protection for all populations: the AI Act's provisions do not extend extraterritorially, allowing companies to export systems that would be prohibited domestically.
  • Institutional Complicity Across Sectors: The surveillance infrastructure stretches across private companies, government institutions, and academic organizations, creating a complex ecosystem where responsibility is diffused and accountability is minimal.

The implications are significant. These high-risk AI systems are not merely theoretical concerns; they directly exacerbate the expansion of surveillance and repression capacities of governments and other actors against the populations they govern. The transfer of facial recognition, biometric identification, and predictive security systems enables mass monitoring, discriminatory enforcement, and human rights abuses on a scale that would be unacceptable within the EU itself.

What Happens Next as the EU Implements Its AI Act?

The EU is currently navigating the practical implementation of the AI Act, with enforcement mechanisms still being finalized. The European Commission's supervision and enforcement powers against general-purpose AI model providers commence on August 2, 2026, giving regulators new tools to oversee AI systems within EU borders. However, the report raises urgent questions about whether these enforcement mechanisms will address the export of high-risk systems or remain focused solely on domestic applications.

Meanwhile, the EU Parliament has adopted positions on AI Act simplification proposals, with 569 votes in favor, 45 against, and 23 abstentions. These proposals delay rules for high-risk AI systems to allow time for implementation guidance and standards preparation, with fixed application dates of December 2, 2027, for high-risk systems and August 2, 2028, for systems covered by EU sectoral safety legislation. The Parliament also introduced a ban on "nudifier" systems that create or manipulate sexually explicit images without consent, demonstrating the EU's willingness to address specific harms when politically motivated.

The disconnect between these domestic protections and international exports suggests that the EU's AI governance framework remains incomplete. As the bloc refines its regulatory approach, civil society organizations are calling for an internationalist movement to fight for human rights protections that extend beyond European borders, ensuring that the principles embedded in the AI Act apply equally to all populations affected by European technology exports.