The EU's Hidden Gender Problem in AI: Why Women Are Missing From the Data That Powers Tomorrow's Systems

Gender bias in artificial intelligence is not a bug waiting to be fixed; it is a structural feature built layer by layer across entire AI systems, from the data used to train models to the design choices made by developers. As the EU AI Act moves toward full implementation in August 2026, experts are raising urgent alarms about how gender inequality is being automated into high-risk systems already deployed in employment screening, healthcare, migration assessment, and education access.

Why Is Gender Bias in AI Systems So Hard to Detect?

Weijie Huang, a researcher at the Inclusive AI Lab at Utrecht University, frames the problem clearly: gender bias in AI is fundamentally a governance issue, not a technical one. The problem unfolds across multiple interconnected layers that compound over time. Historically, much of the world's data has been built around a white male default. In medical research, for example, diagnostic models trained primarily on male-centered data have been shown to produce higher misdiagnosis rates for women, particularly women from minority backgrounds.

The challenge becomes even more complex when intersectionality enters the picture. Large-scale datasets used to train facial recognition systems have disproportionately included light-skinned male faces, leaving women of color statistically underrepresented and therefore systematically less accurately identified. This inherited bias is not intentional discrimination, but the consequences are just as harmful.
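How would such a disparity even be detected? The standard first step is disaggregated evaluation. The sketch below is a minimal illustration in Python, assuming hypothetical prediction logs with gender and skin-tone labels (the column names and toy numbers are invented for this example, not taken from any real benchmark): instead of one aggregate accuracy figure, error rates are computed per intersectional subgroup, which is exactly the kind of breakdown that exposed the facial recognition gap.

    # Minimal, hypothetical audit sketch: disaggregate accuracy by
    # intersecting demographic attributes instead of one overall average.
    # Column names and toy data are invented for illustration.
    import pandas as pd

    logs = pd.DataFrame({
        "gender":    ["f", "f", "f", "f", "m", "m", "m", "m"],
        "skin_tone": ["dark", "dark", "light", "light",
                      "dark", "light", "light", "dark"],
        "correct":   [0, 0, 1, 1, 1, 1, 1, 1],  # 1 = correctly identified
    })

    # A single average looks acceptable...
    print("overall accuracy:", logs["correct"].mean())

    # ...but grouping by gender x skin tone reveals who bears the errors.
    print(logs.groupby(["gender", "skin_tone"])["correct"].agg(["mean", "count"]))

In real audits the same grouping is applied to false-match and false-non-match rates over large evaluation sets; the point is that the disparity stays invisible until someone chooses to slice the data this way.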

A third dimension concerns what researchers call structural absence. Women are not always misrepresented in the data; sometimes they are penalized because their life trajectories do not match the assumed norm. An AI recruitment tool trained on predominantly male career histories may interpret caregiving gaps as lower productivity, automating an inequality it was never designed to question. When women are missing, misrepresented, or misunderstood in the data, inequality becomes automated, and once automated, it becomes harder to see and harder to contest.
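The mechanics are easy to reproduce. Below is a minimal synthetic sketch (the feature names and data-generating rule are assumptions for illustration, not any vendor's actual system) of how a screening model trained on historical decisions absorbs a gap penalty it was never explicitly given:

    # Synthetic sketch: a model trained on past hiring outcomes learns to
    # penalize career gaps because the historical decisions did.
    # All features and the generating rule are invented for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000
    experience = rng.normal(10, 3, n)      # years of experience
    gap_years = rng.exponential(1.0, n)    # e.g. caregiving-related gaps

    # Historical decisions rewarded experience and penalized any gap,
    # independent of actual job performance.
    hired = (0.5 * experience - 1.5 * gap_years + rng.normal(0, 1, n)) > 4

    X = np.column_stack([experience, gap_years])
    model = LogisticRegression().fit(X, hired)

    # The learned weight on gap_years is strongly negative: the historical
    # penalty is now automated, with no one having asked for it.
    print(dict(zip(["experience", "gap_years"], model.coef_[0].round(2))))

Note that gender never appears as an input; the penalty travels through the gap feature itself, which is precisely why such bias is hard to see and hard to contest after deployment.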

What Are the Real-World Harms Beyond Hiring and Healthcare?

One of the most underacknowledged governance failures in the AI debate is the disproportionate impact of deepfake technology on women. Research conducted by the Inclusive AI Lab in collaboration with the Google Safety and Security team examined how deepfake-related harms are experienced by women and girls in the Global South. Much of the regulatory debate around deepfakes focuses on political disinformation and election interference, but these are not where the majority of deepfake harm is actually happening.

The largest offender today is gendered violence, particularly non-consensual synthetic intimate imagery. This gap between statistical reality and regulatory focus is itself a governance failure. Three stages explain why women are particularly vulnerable to deepfake harm:

  • Production Stage: The gender imbalance in the AI sector means that safety features protecting women from digital harm are not a priority in system design.
  • Amplification Stage: Algorithms accelerate the spread of harmful content for profit, with liability remaining largely unclear and unaddressed.
  • Commodification Stage: Women's digital identities, their faces, and their bodies are treated as data assets in a market that has found a way to extract value from them.

Gendered harm is not accidental. It is embedded in design choices, platform incentives, and economic models. If governance intervenes only at the content level, it is already too late; lifecycle governance must intervene upstream.

How Can the EU Embed Gender Equality Across the AI Lifecycle?

Huang introduced the Gender AI Safety Framework developed at the Inclusive AI Lab, a practical governance roadmap organized across three interconnected layers that work as a continuous, iterative cycle rather than a linear checklist:

  • Input Layer: Establish strong civil society networks, recruit feminist researchers, and build governance structures that protect collective rights rather than only individual ones.
  • Process Layer: Test policies and models with real communities, especially those most affected, and apply design justice principles that ask who builds AI, who benefits, and who bears the risk.
  • Purpose Layer: Center empowerment by providing tools, digital literacies, and survivor-centered reporting systems so that communities gain agency over the AI systems shaping their lives.

The framework recognizes that AI systems trained in the EU and US carry embedded assumptions that do not travel neutrally across global contexts. If the EU is serious about embedding gender equality in AI governance, it must reckon with how its regulation interacts with global data.

Global examples demonstrate that communities are already building their own solutions. In Senegal, the I Am the Code Foundation trains girls and young women in coding and digital skills with the explicit goal of making them producers of data and designers of systems. In Indonesia, digital financial platforms have adopted alternative credit indicators drawing on community participation and everyday behavior patterns. In Pakistan, the Digital Rights Foundation's cyber harassment helpline has documented over 20,000 cases, demonstrating that making gendered harm legible to regulators is both possible and necessary.

"Gender bias in AI is not a technical problem; it's a governance issue. And if the EU AI Act is to protect fundamental rights in practice, gender equality must be embedded across the AI lifecycle, not treated as an afterthought," explained Weijie Huang.


The stakes for Europe are threefold. First, there is fundamental rights exposure if bias is embedded upstream and difficult to detect. Second, there is public trust erosion if citizens perceive that automated systems reproduce structural inequality. Third, there is regulatory fragmentation risk if EU member states interpret gender safeguards differently across high-risk implementations.

As the EU AI Act moves into its implementation phase, the window to embed gender equality into these systems is closing rapidly. The question is not whether gender bias exists in AI; the evidence is overwhelming. The question now is whether Europe's regulators and industry leaders will treat it as a governance priority or continue to defer it as a technical afterthought.