Why the EU Is Treating AI-Generated Gender Violence as a Design Problem, Not Just Content Moderation

The EU is shifting its approach to AI-generated gender violence from treating symptoms to preventing the disease at its source. Recent legislative moves, including Germany's push to ban deepfake pornography and the European Parliament's proposed amendments to the AI Act targeting "nudifier" apps, signal a fundamental change in how policymakers view the problem. Instead of focusing solely on removing harmful content after it spreads, regulators are now asking how AI systems can be designed and governed to prevent such harms from being created in the first place.

What Makes AI-Generated Gender Violence Different from Traditional Cyberharassment?

The distinction matters enormously. Traditional digital gender-based violence, such as harassment, stalking, and doxing, has existed for years and is broadly covered by existing EU legislation. AI-generated harmful content is qualitatively different: it manufactures harm at scale and lowers the barriers to real-life violence by allowing bad actors to exploit grey areas that current legal instruments were never designed to address.

The scale and speed of AI-generated content creation fundamentally change the problem. A single person can now generate thousands of non-consensual intimate images in minutes, something that would have been impossible just a few years ago. This technological shift demands a corresponding shift in how the EU approaches regulation and governance.

How Is the EU Currently Addressing AI-Generated Gender Violence?

The EU has several existing frameworks that provide a foundation for addressing these harms, though experts argue they remain incomplete:

  • Violence Against Women Directive: In 2024, this directive criminalized non-consensual AI-generated deepfakes, marking the first explicit legal recognition of AI-enabled gender violence.
  • Anti-Trafficking Directive: This framework acknowledges how digital tools are used to recruit, advertise, and control victims, risks that AI intensifies significantly.
  • Action Plan Against Cyberbullying: This recognizes the growing threat posed by generative AI in the context of gendered violence.
  • Digital Services Act: The EU's platform moderation toolkit, which anchors the current, removal-centred approach of addressing the outcomes of AI-facilitated gender violence after content has spread.

However, these frameworks focus primarily on treating the outcomes of AI-facilitated gender violence rather than preventing it upstream. Two years after the Digital Services Act entered into force, six EU member states still lack trusted flaggers, the independent bodies tasked with alerting platforms to illegal content. This gap highlights the challenge of relying on content moderation alone.

Why Isn't the AI Act Addressing Gender Equality Adequately?

The EU's AI Act, the landmark regulation governing artificial intelligence development and deployment, mentions gender equality but falls short in critical ways. The Act does not acknowledge gendered power structures as a fundamental influence on how AI systems are designed, trained, and deployed, nor does it adequately address AI's societal implications for gender equality.

The AI Act's Code of Practice on Transparency, which governs the labeling of AI-generated content, represents a missed opportunity. The second draft of this code does not mention gender at all. Similarly, the risk taxonomy under the Code of Practice on General-Purpose AI fails to classify gender-based discrimination and gender-based violence as systemic risks. While model providers must consider discrimination and fundamental rights impacts, they themselves decide on a case-by-case basis whether these impacts pose systemic risk, with no external scrutiny until the EU AI Office assesses the Model Report.

This structural limitation stems from the AI Act's legal foundation. Unlike the General Data Protection Regulation, which is rooted in human rights law, the AI Act is grounded in product safety legislation, limiting its capacity to shape societal outcomes and prevent gender-based harms by design.

What Would Upstream Prevention Actually Look Like?

Preventing AI-generated gender violence upstream means governing AI throughout its entire lifecycle, from data collection to model development, training, and deployment. This requires several interconnected approaches that go beyond current regulatory frameworks.

High-quality, representative data is foundational. The EU's Common European Data Spaces initiative recognizes this, but implementation remains uncertain. Representative data norms and intersectional impact assessments, applied beyond narrowly defined "high-risk" applications, may help reduce AI-facilitated gender violence at the source. Standard-setting bodies are developing shared methodologies for dataset quality and governance that can detect bias, yet practical implementation across the industry remains a work in progress.
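To make this concrete, the sketch below shows one possible building block of such a methodology: a simple representation check over a training dataset. Everything in it is an illustrative assumption; the record format, the gender attribute, and the 15 percent floor are hypothetical rather than drawn from any EU standard.

```python
# Minimal sketch of a dataset representation check, assuming tabular records
# with a self-reported gender attribute. The 15% floor is an illustrative
# threshold, not a value taken from any EU standard or methodology.
from collections import Counter

def representation_report(records, attribute="gender", floor=0.15):
    """Report each group's share of the dataset and flag under-representation."""
    counts = Counter(rec[attribute] for rec in records)
    total = sum(counts.values())
    return {
        group: {"share": round(n / total, 3), "below_floor": n / total < floor}
        for group, n in counts.items()
    }

# Toy records standing in for a real training dataset.
records = (
    [{"gender": "female"}] * 2
    + [{"gender": "male"}] * 6
    + [{"gender": "non-binary"}]
)
print(representation_report(records))
# female 0.222 and male 0.667 pass; non-binary 0.111 is flagged below the floor
```

A real methodology would go far beyond raw counts, covering labeling provenance and intersectional subgroups, but even this simple check illustrates how bias detection can happen before a model is ever trained.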

Stronger transparency measures are also essential. Building explainable AI systems, rather than black-box systems that cannot be understood or challenged, serves broader democratic goals by uncovering the power structures embedded in algorithmic systems. When AI systems can show their work, users and regulators can challenge AI-generated content or decisions that perpetuate discrimination before harm occurs.
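As a toy illustration of the difference, the sketch below trains a deliberately simple, inspectable classifier on synthetic data whose labels leak a gendered feature. The feature names and data are invented; the point is only that a model whose weights can be read lets an auditor see, and therefore challenge, reliance on gender, which an opaque model would hide.

```python
# Minimal sketch contrasting an inspectable model with a black box, on
# synthetic data. The feature names are hypothetical; labels are generated
# to leak the gendered feature, mimicking biased training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Features: perceived_gender (0/1), account_age_years, post_length_kb
X = np.column_stack([
    rng.integers(0, 2, n).astype(float),
    rng.uniform(0, 10, n),
    rng.uniform(0, 5, n),
])
y = (0.8 * X[:, 0] + rng.normal(0, 0.3, n) > 0.5).astype(int)

model = LogisticRegression().fit(X, y)
for name, coef in zip(
    ["perceived_gender", "account_age_years", "post_length_kb"], model.coef_[0]
):
    print(f"{name:>18}: {coef:+.3f}")
# A large weight on perceived_gender is exactly the kind of embedded bias an
# auditor can surface in an explainable system but not in an opaque one.
```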

How Do Societal Attitudes Amplify the Problem?

The urgency of this issue is underscored by troubling trends in public attitudes toward gender equality. Approximately 30 percent of EU citizens believe women should accept sexist or abusive online responses, while 25 percent believe women exaggerate claims of rape or abuse. These attitudes are not static; they are actively reinforced by AI-driven recommender systems that reward attention and create echo chambers of misogyny.

This dynamic creates a vicious cycle. AI systems trained on data reflecting existing gender biases amplify those biases through recommendation algorithms, which in turn reinforce misogynistic attitudes in the broader population. Breaking this cycle requires addressing both the technical design of AI systems and the broader societal context in which they operate.
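The amplification half of that cycle can be caricatured in a few lines of simulation. The sketch below rests entirely on invented assumptions (initial exposure shares, click probabilities, and a naive engagement-driven update rule) and models no real platform; it only shows how a small engagement edge compounds once a recommender rewards attention.

```python
# Toy simulation of the attention feedback loop described above. Initial
# exposure shares, click probabilities, and the update rule are invented
# assumptions, not measurements of any real platform.
import random

random.seed(42)
weights = {"neutral": 0.9, "misogynistic": 0.1}       # initial exposure shares
click_prob = {"neutral": 0.05, "misogynistic": 0.08}  # outrage earns more clicks

for _ in range(50):
    total = sum(weights.values())
    clicks = {}
    for kind, w in weights.items():
        impressions = int(1000 * w / total)   # serve items by current weight
        clicks[kind] = sum(random.random() < click_prob[kind]
                           for _ in range(impressions))
    # Naive engagement optimisation: shift weight toward whatever got clicked.
    for kind in weights:
        weights[kind] += 0.01 * clicks[kind]

total = sum(weights.values())
print({k: round(w / total, 2) for k, w in weights.items()})
# The misogynistic share grows well beyond its initial 10%: the design of the
# update rule, not any single piece of content, drives the amplification.
```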

Viktoria Henkemeier and Samuel Goodger, policy analysts at the European Policy Centre, noted that gender mainstreaming across EU digital policies is needed beyond the actions set out in the recently adopted Gender Equality Strategy. Their analysis emphasizes that since AI reflects the ethics, values, and biases of those who fund and develop it, real-world inequalities are embedded within AI systems from the start.

What Needs to Change in AI Governance?

Experts argue that the EU must take several concrete steps to mainstream gender in AI policy and prevent gender-based harms by design:

  • Gender-Sensitive Standards: Introduce audit methodologies to check for gender bias in AI systems, in line with the EU Charter of Fundamental Rights, ensuring that gender equality is treated as a fundamental right rather than an afterthought (one such check is sketched after this list).
  • Intersectional Impact Assessments: Require AI developers to conduct impact assessments that consider how AI systems affect people at the intersection of multiple marginalized identities, not just single categories of risk.
  • Board-Level Accountability: Establish clear governance requirements that make senior leadership responsible for ensuring AI systems do not perpetuate gender-based harms, similar to how financial institutions are held accountable for compliance.
  • External Review Mechanisms: Move beyond allowing companies to self-assess systemic risks and implement independent review processes before high-risk AI systems are deployed.
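As a concrete illustration of the first recommendation, the sketch below computes one widely used fairness metric, the demographic parity gap, over a hypothetical decision log. The metric choice, the 0.1 threshold, and the data are illustrative assumptions; an actual audit methodology would combine several metrics with legal tests under the Charter.

```python
# Minimal sketch of one possible audit check: the demographic parity gap in a
# system's positive decisions. The metric, the 0.1 threshold, and the toy
# decision log are illustrative assumptions, not a prescribed EU methodology.
def demographic_parity_gap(decisions, groups):
    """Max difference in positive-decision rates across groups (0 = parity)."""
    totals, hits = {}, {}
    for d, g in zip(decisions, groups):
        totals[g] = totals.get(g, 0) + 1
        hits[g] = hits.get(g, 0) + d
    rates = {g: hits[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy audit log: 1 = approved, 0 = rejected, alongside each subject's gender.
decisions = [1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0]
groups    = ["f", "f", "f", "f", "m", "m", "m", "m", "m", "f", "f", "m"]
gap, rates = demographic_parity_gap(decisions, groups)
print(rates)                  # per-group approval rates: f 0.33 vs m 0.67
print("flag:", gap > 0.1)     # True: the gap exceeds the illustrative threshold
```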

The path forward requires recognizing that technology does not exist in a vacuum. Only by addressing both the technical design of AI systems and the broader equality policies that shape the societal context in which technology develops can the EU hope to prevent AI from becoming another tool for perpetuating violence against women.