The EU's AI Simplification Plan Is Actually Making Things Riskier for Citizens, Critics Warn

The European Union's push to simplify its digital rulebook is backfiring, according to human rights advocates who argue that the proposed Digital Omnibus Package actually strips away critical protections for citizens rather than modernizing them. While the European Commission frames the initiative as reducing bureaucratic complexity for businesses, critics contend it fundamentally weakens the EU AI Act and GDPR (General Data Protection Regulation) by redefining personal data protections and removing transparency requirements that shield vulnerable groups from algorithmic harm.

What Is the Digital Omnibus Package Trying to Do?

The Digital Omnibus Package consists of two main legislative proposals designed to consolidate the EU's fragmented digital regulations. The European Commission introduced these acts to create a more cohesive regulatory framework that would simplify compliance for companies operating across EU member states. The stated goal is to reduce administrative burden while maintaining protection for fundamental rights, including data protection, consumer rights, and privacy.

On the surface, this sounds reasonable. Companies currently navigate overlapping requirements from the GDPR, the Data Act, the ePrivacy Directive, and the EU AI Act. A unified rulebook could theoretically make compliance easier. However, the devil is in the details. The proposed amendments include what advocates describe as a "substantive redefinition" of personal data that could significantly weaken existing protections.

How Could These Changes Harm Vulnerable Groups?

Human rights organizations and advocacy groups have raised alarm about specific provisions that could disproportionately affect vulnerable populations. The concerns center on three main areas: weakened access rights, removed transparency requirements, and diminished accountability for high-risk AI systems.

  • Reduced Access Rights: The proposed changes significantly diminish individuals' ability to exercise their right of access, a fundamental protection that allows people to understand what personal data is being collected about them and how it is being processed. This makes it harder for data subjects to monitor or contest the exploitation of their information.
  • Removed Risk Assessment Mandates: By eliminating requirements for companies to publish risk assessments for high-risk AI systems, the Digital Omnibus Package grants technology firms the power to define their own systems' risk profiles without external scrutiny or challenge. This is particularly concerning for biometric systems, AI used in law enforcement, and employment management tools.
  • Weakened Enforcement for High-Risk Systems: The amendments risk compromising enforcement of established rules, particularly for high-risk AI systems that present profound hazards to health, safety, and fundamental rights of European citizens, including facial recognition and predictive policing tools.

Amnesty International has documented real-world examples of how legally compliant AI systems can still violate human rights. In Hungary, facial recognition technology was used to target peaceful assemblies such as Pride marches in Budapest and Pécs. Across multiple EU member states, including Denmark, France, Sweden, and the Netherlands, fraud detection algorithms integrated into "digital welfare state" systems have discriminated against vulnerable groups including ethnic minorities, low-income demographics, and displaced persons.

"These sets of laws seem to favour the lobby done by technology companies that could further broaden data collection methodologies and reinforce business models based on actively recording people across EU's territory," noted advocacy organizations analyzing the package.

Advocacy organizations, Human Rights Research Center

The concern is that the Digital Omnibus Package's "simplification" process effectively serves as a deregulation initiative likely to benefit commercial interests. Technology companies could gain extensive access to private data, especially with the rise of AI usage, while individuals lose the tools to understand or challenge how their information is being used.

Steps to Protect Your Digital Rights During Regulatory Changes

  • Stay Informed About Your Rights: Understand your current rights under GDPR, including the right to access your personal data, the right to be forgotten, and the right to object to automated decision-making. Monitor how proposed regulatory changes might affect these protections.
  • Request Data Access Regularly: Exercise your right to access the personal data organizations hold about you. This practice helps you understand what information is being collected and how it is being used, and it creates a record of your engagement with data protection rights.
  • Engage with Advocacy Organizations: Support and participate in campaigns by human rights groups monitoring AI regulation. These organizations track proposed changes and mobilize public input during consultation periods when policymakers are still considering amendments.
  • Document AI Interactions: Keep records of when you interact with AI systems, particularly in high-risk contexts like employment screening, welfare eligibility determination, or law enforcement interactions. This documentation can be valuable if you need to challenge algorithmic decisions.
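The access-request step above can even be scripted. The sketch below drafts a simple subject access request letter invoking Article 15 GDPR (right of access) and the one-month response deadline in Article 12(3) GDPR; the controller and requester names are placeholders you supply, and the wording is illustrative rather than legal advice.

```python
from datetime import date

def draft_access_request(controller: str, requester: str) -> str:
    """Draft an illustrative GDPR Article 15 subject access request.

    Article 15 GDPR entitles a data subject to confirmation of processing
    and a copy of their personal data; Article 12(3) GDPR requires the
    controller to respond within one month. Names are placeholders.
    """
    return (
        f"Date: {date.today().isoformat()}\n"
        f"To: {controller}\n\n"
        "Dear data protection officer,\n\n"
        "Under Article 15 of the General Data Protection Regulation (GDPR), "
        "I request confirmation of whether you process personal data "
        "concerning me and, if so, a copy of that data together with the "
        "purposes of processing, the categories of data concerned, and the "
        "recipients to whom it has been or will be disclosed.\n\n"
        "Please respond within one month, as required by Article 12(3) "
        "GDPR.\n\n"
        f"Sincerely,\n{requester}\n"
    )

# Example: generate a letter addressed to a hypothetical controller.
print(draft_access_request("Example Analytics Ltd.", "Jane Doe"))
```

Sending such a request periodically, and keeping the replies, builds exactly the paper trail the documentation step above recommends.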

What Happens Next in the EU Regulatory Process?

The European Commission's legislative proposals are not yet final. The Digital Omnibus Package will undergo further negotiations with both the Council of the European Union and the European Parliament, which are beginning to resist the most problematic provisions within the AI Omnibus. This creates a window for public input and advocacy before the rules are finalized.

The timing matters because the EU AI Act itself is already being phased in. The first tier of banned AI practices took effect on February 2, 2025, covering practices like subliminal manipulation, exploiting user vulnerabilities, social scoring, and untargeted facial recognition scraping. High-risk AI systems face compliance deadlines of August 2, 2026, with product-embedded systems following on August 2, 2027.

If the Digital Omnibus Package weakens these protections before they are fully implemented, the practical impact could be substantial. Companies would have less incentive to invest in the transparency, documentation, and human oversight requirements that the AI Act currently mandates for high-risk systems.

Why Is This Happening Now?

The push for regulatory simplification reflects genuine frustration from businesses about compliance complexity. However, it also reflects broader geopolitical tensions. The EU is simultaneously trying to achieve "digital sovereignty" by reducing dependence on U.S. technology companies while maintaining competitiveness in AI innovation. This creates pressure to reduce regulatory barriers that might slow European AI development relative to the United States.

The challenge is that simplification and protection are not always compatible. The EU AI Act was designed with a risk-based approach, meaning stricter obligations apply to higher-risk systems. Removing transparency and accountability requirements might make compliance easier for companies, but it also removes the mechanisms that allow regulators and citizens to identify when AI systems cause harm.

The Human Rights Research Center acknowledges the EU's efforts to establish more cohesive digital rules but emphasizes that any policy must adhere to principles found in international human rights law. The question now is whether the European Parliament and Council will agree with that assessment or prioritize business simplification over citizen protection.