Why the EU's AI Regulation Is Missing Half the Picture: The Gender Equality Gap

Gender bias in artificial intelligence isn't a technical glitch waiting for a software patch; it's a structural problem built into how AI systems are designed, trained, and deployed. As the EU AI Act moves toward full implementation in August 2026, researchers are raising an urgent alarm: the regulation doesn't go far enough to protect women from algorithmic discrimination. According to experts at the Inclusive AI Lab at Utrecht University, gender equality must be embedded across the entire AI lifecycle, from the data used to train models to the institutional decisions about how those systems get used in the real world.

How is gender bias actually built into AI systems?

The problem starts with data. Much of the world's historical data has been built around a white male default, which means AI systems trained on that data inherit those biases. In medical research, for example, diagnostic models trained primarily on male-centered data have produced higher misdiagnosis rates for women, particularly women from minority backgrounds. The issue compounds with intersectionality: facial recognition systems trained on datasets with disproportionately light-skinned male faces systematically misidentify women of color at higher rates.
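The kind of disparity described above can be made concrete with a group-level error audit. The sketch below uses entirely synthetic, hand-made labels and predictions (not data from any real clinical model) to show how a higher false-negative rate for one group is measured:

```python
# Hypothetical illustration: comparing diagnostic false-negative rates by group.
# All labels and predictions below are invented for demonstration purposes.

def false_negative_rate(labels, preds):
    """Share of true cases (label == 1) that the model failed to flag."""
    missed = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    positives = sum(labels)
    return missed / positives if positives else 0.0

# Synthetic outcomes from a model imagined to be trained on male-centered data:
male_labels   = [1, 1, 1, 1, 0, 0, 0, 0]
male_preds    = [1, 1, 1, 0, 0, 0, 0, 0]   # misses 1 of 4 true cases
female_labels = [1, 1, 1, 1, 0, 0, 0, 0]
female_preds  = [1, 0, 0, 1, 0, 0, 0, 0]   # misses 2 of 4 true cases

print(false_negative_rate(male_labels, male_preds))     # 0.25
print(false_negative_rate(female_labels, female_preds)) # 0.5
```

An audit like this only surfaces the gap after the fact; the article's point is that the skew enters earlier, in what the training data represents.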

But the problem goes deeper than just missing data. Sometimes women aren't misrepresented in datasets; instead, they're penalized because their life trajectories don't match the assumed norm. An AI recruitment tool trained on male career patterns might interpret caregiving gaps as lower productivity, automating an inequality it was never designed to question. Additionally, according to UN Women, 80% of gender-related Sustainable Development Goal indicators globally lack complete data, meaning AI systems built on those incomplete datasets inherit blind spots about unpaid care work, digital access gaps, and gender-based violence.
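A minimal sketch of the recruitment scenario above, using invented candidates and a deliberately naive scoring rule, shows how penalizing career gaps produces a selection-rate disparity. The "four-fifths" impact ratio used here is a common audit heuristic, not something prescribed by the AI Act:

```python
# Hypothetical sketch: a screening rule that treats every gap year as lost
# productivity -- exactly the assumption the paragraph above warns against.
# All candidate data is synthetic.

def screen(candidate, threshold=6):
    score = candidate["experience_years"] - 2 * candidate["gap_years"]
    return score >= threshold

candidates = [
    {"gender": "m", "experience_years": 8, "gap_years": 0},
    {"gender": "m", "experience_years": 7, "gap_years": 0},
    {"gender": "m", "experience_years": 6, "gap_years": 0},
    {"gender": "m", "experience_years": 5, "gap_years": 0},
    {"gender": "f", "experience_years": 8, "gap_years": 2},  # caregiving gap
    {"gender": "f", "experience_years": 7, "gap_years": 0},
    {"gender": "f", "experience_years": 6, "gap_years": 1},
    {"gender": "f", "experience_years": 5, "gap_years": 2},
]

def selection_rate(group):
    pool = [c for c in candidates if c["gender"] == group]
    return sum(screen(c) for c in pool) / len(pool)

impact_ratio = selection_rate("f") / selection_rate("m")
print(impact_ratio)  # a ratio below 0.8 is a conventional red flag for bias
```

Note that the rule never looks at gender directly; the disparity emerges entirely from a proxy feature, which is why such bias survives naive "fairness through blindness" checks.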

"Gender bias in AI is not a technical problem; it's a governance issue. And if the EU AI Act is to protect fundamental rights in practice, gender equality must be embedded across the AI lifecycle, not treated as an afterthought," explained Weijie Huang, a researcher at the Inclusive AI Lab at Utrecht University.


What's the biggest gender-related AI harm that regulators are ignoring?

One of the most underacknowledged governance failures in the AI debate involves deepfake technology. Research conducted by the Inclusive AI Lab in collaboration with the Google Safety and Security team examined how deepfake-related harms affect women and girls in the Global South. Much of the regulatory debate around deepfakes focuses on political disinformation and election interference, but that's not where the majority of deepfake harm is actually occurring. The largest offender today is gendered violence, particularly non-consensual synthetic intimate imagery.

Three stages explain why women are particularly vulnerable to deepfake harm. At the production stage, the gender imbalance in the AI sector means that safety features protecting women from digital harm aren't a priority. At the amplification stage, algorithms accelerate the spread of harmful content for profit, with liability remaining largely unclear. At the commodification stage, women's digital identities, faces, and bodies are treated as data assets in a market that extracts value from women's images. This isn't accidental; it's embedded in design choices, platform incentives, and economic models.

Researchers at the Inclusive AI Lab have developed the Gender AI Safety Framework, a practical governance roadmap organized across three interconnected layers and designed as a continuous, iterative cycle rather than a linear checklist. The framework recognizes that gender equality in AI governance requires intervention at every stage, not just at the content level once harm has already occurred.

Steps to Embed Gender Equality Into AI Governance

  • Input Layer: Build strong civil society networks, involve feminist researchers, and create governance structures that protect collective rights rather than only individual ones, ensuring diverse perspectives shape AI from the start.
  • Process Layer: Test policies and models with real communities, especially those most affected by AI systems, and apply design justice principles that ask who builds AI, who benefits, and who bears the risk.
  • Purpose Layer: Center empowerment by providing tools, digital literacies, and survivor-centered reporting systems so communities gain agency over the AI systems shaping their lives.

The stakes are significant. With high-risk AI systems already being used in employment screening, healthcare, migration assessment, and education access, the governance challenges are threefold: fundamental rights exposure if bias is embedded upstream and difficult to detect; public trust erosion if citizens perceive that automated systems reproduce structural inequality; and regulatory fragmentation risk if EU Member States interpret gender safeguards differently across high-risk implementations.

Importantly, a comparative review of regulatory approaches across the EU, United States, China, and parts of the Global South revealed that existing frameworks tend to prioritize quantifiable or geopolitically legible harms, leaving complex social harms, including technology-facilitated gender-based violence, under-addressed. However, communities around the world are building their own solutions. In Senegal, the I Am the Code Foundation trains girls and young women in coding and digital skills with the explicit goal of making them producers of data and designers of systems. In Indonesia, digital financial platforms have adopted alternative credit indicators drawing on community participation and everyday behavior patterns. In Pakistan, the Digital Rights Foundation's cyber harassment helpline has documented over 20,000 cases, demonstrating that making gendered harm legible to regulators is both possible and necessary.

As the EU moves toward implementing the AI Act, the message from researchers is clear: gender equality cannot be an afterthought. It must be woven into how AI systems are built, tested, deployed, and governed. Without that fundamental shift, the regulation risks automating the very inequalities it was designed to prevent.
