Why AI Regulation Is Failing Women: The Gender Gap Nobody's Talking About
The EU is banning deepfake pornography and non-consensual intimate imagery created by AI, but regulators are treating gender-based AI violence as a content problem rather than a design problem. This fundamental mismatch means that even as new laws take shape, the systems creating harm at scale remain largely unexamined.
The urgency is real. German lawmakers recently called for a ban on deepfake pornography after two prominent actors revealed details of the AI-enabled violence they suffered. Denmark introduced a law protecting people's voices and appearances as intellectual property. At the EU level, the European Parliament proposed amendments to the AI Act to specifically ban AI "nudifier" apps used to create sexually explicit images without consent, which largely target women.
Yet these reactive measures mask a deeper structural problem: gender equality is not embedded in AI governance frameworks at all. The EU's AI Act mentions gender equality only in passing, and it does not acknowledge gendered power structures as a fundamental influence on how AI systems are designed, trained, and deployed. The recently adopted Gender Equality Strategy missed an opportunity to call for gender mainstreaming in the Act's implementation, particularly given that implementation and standard-setting processes are still ongoing.
Why Current AI Laws Miss the Real Problem
The distinction matters enormously. Gender-based digital violence like harassment, stalking, and doxing predates widespread AI and is broadly covered by existing legislation. AI-generated harmful content is qualitatively different: it manufactures harm at scale and lowers barriers to real-life violence by exploiting grey areas that existing instruments are not equipped to address.
The EU's approach anchors AI oversight in the Digital Services Act (DSA), the platform moderation toolkit. This means the current framework focuses on treating the outcomes of AI-facilitated gender violence rather than preventing it upstream. Two years after the DSA entered into force, six member states still lack trusted flaggers, the independent bodies tasked with alerting platforms to illegal content. While the DSA can enable the removal of deepfakes, stopping them at the source means governing AI throughout its lifecycle, from data collection to model development, training, and deployment.
The AI Act Code of Practice on Transparency, which governs the labeling of AI-generated content, could better address gender equality, but the second draft does not mention gender at all. Similarly, the risk taxonomy under the Code of Practice on General-Purpose AI fails to classify gender-based discrimination and gender-based violence as systemic risks. While model providers must consider discrimination and fundamental rights impacts, they themselves are responsible for deciding, on a case-by-case basis, whether these impacts pose systemic risk, with no external review until the EU AI Office reviews the Model Report.
How to Build Gender-Sensitive AI Governance
- Embed Gender in Standard-Setting: Gender-sensitive standards should introduce audit methodologies to check for bias in AI systems, in line with the EU Charter on Fundamental Rights. Current standard-setting procedures are developing shared methodologies for dataset quality and governance to detect bias, yet their implementation remains uncertain.
- Require Representative Data and Intersectional Assessments: High-quality, representative data norms and intersectional impact assessments, extending beyond narrowly defined "high-risk" applications, may help reduce AI-facilitated gender violence upstream. The Common European Data Spaces initiative recognizes this need, but execution lags behind policy.
- Strengthen Transparency and Contestability: Unlike the General Data Protection Regulation, which is rooted in human rights law, the AI Act is grounded in product safety legislation, limiting its capacity to shape societal outcomes. Addressing this gap requires stronger transparency measures, allowing users and regulators to challenge AI-generated content or decisions that perpetuate discrimination before harm occurs.
- Address Algorithmic Amplification of Misogyny: AI-driven recommender systems reward attention and create echo chambers of misogyny. Broader equality policies that shape the societal context in which technology develops are essential alongside upstream interventions in AI and data governance.
The scale of the problem extends beyond deepfakes. An estimated 29% of female politicians are victims of cyberviolence. Attitudinal backsliding on gender equality is accelerating: 30% of EU citizens believe women should accept sexist or abusive online responses, while 25% believe women exaggerate claims of rape or abuse. The same attention-rewarding recommender systems then amplify these attitudes at scale.
"Gender-based digital violence disproportionately affects women, particularly female politicians, raising important policy questions: how can such harms be mitigated effectively? Further, how can new technologies avoid perpetuating violence against women in the first place?" noted the European Policy Centre in its analysis of AI governance gaps.
European Policy Centre, Health and Social Resilience Programme
The EU's Directive on Combating Violence against Women, the Victims' Rights Directive, the Anti-Trafficking Directive, and the Action Plan Against Cyberbullying provide a starting point. In 2024, the Violence Against Women Directive criminalized non-consensual AI-generated deepfakes. The Anti-Trafficking Directive acknowledges how digital tools are used to recruit, advertise, and control victims, risks that AI intensifies. The Action Plan Against Cyberbullying recognizes the growing threat posed by generative AI in the context of gendered violence.
But these frameworks operate in silos. Without gender mainstreaming across the AI Act's implementation, standard-setting, and enforcement, the technology will continue to reproduce and amplify gender inequalities by design. The window to fix this is closing. As AI systems become more powerful and more widely deployed, the structural biases embedded in them become harder to unwind. Regulators must move beyond treating gender-based AI violence as a content moderation problem and recognize it as a fundamental governance challenge that requires rethinking how AI systems are built, trained, and deployed from the ground up.