Europe's AI Act is under pressure to change, but the direction matters enormously for how well ordinary people are protected when AI systems harm them. The European Commission has proposed modifications through an AI Act Omnibus proposal that critics say would significantly weaken safeguards against the most dangerous AI systems, while industry groups argue the rules are too costly and complex. The real problem, experts warn, is that neither approach adequately addresses how people can actually get justice when AI causes harm.

What Exactly Is Weakening in Europe's AI Protections?

The proposed changes would make it easier for companies to comply with the AI Act by simplifying requirements for what the law considers "high-risk" AI systems, those deemed most dangerous to health, safety, and fundamental rights. The European Commission's omnibus proposal would allow simplified compliance for a larger number of companies and permit the omission of key information when registering high-risk systems under certain exemptions. This matters because high-risk systems are the ones that need the most oversight, whether they're used in hiring decisions, medical diagnoses, or criminal justice.

Beyond these technical changes, the bigger issue is that the AI Act's current approach to helping people who are harmed by AI systems is fundamentally weak. The law offers individuals a right to obtain an explanation about how an AI system made a decision affecting them, plus a complaints mechanism. But that's where the protections largely end. There's no requirement that complaints be investigated, no judicial oversight of the process, and no procedural safeguards for people filing complaints. In other words, you can complain, but there's no guarantee anyone will listen or that you'll get a meaningful response.

Why Can't People Get Justice When AI Harms Them?

The EU assumed that existing laws would provide adequate remedies for people harmed by AI, but research from the Center for Democracy and Technology Europe reveals this assumption is misguided. Several existing frameworks could theoretically help, but each has significant gaps when it comes to AI-specific harms.

The General Data Protection Regulation (GDPR), which governs how personal data is used, has been a crucial tool for holding companies accountable. It gives people rights to challenge data processing and provides data protection authorities with strong enforcement powers. However, the GDPR struggles with AI because of what experts call "the problem of many hands": AI systems are developed by multiple parties, each contributing different pieces, which obscures who is actually responsible when something goes wrong. Data regulators have tried to provide guidance on this, but the fundamental challenge remains.

Equality and non-discrimination law offers another potential pathway, with a unique feature: it shifts the burden of proof, meaning that once someone shows evidence of potential discrimination, the company must prove it didn't discriminate. This is powerful because it addresses the information imbalance between individuals and AI developers. However, this approach focuses on individual cases rather than addressing the underlying structural problems that create discrimination in the first place. It also forces individuals to bear the costs and risks of litigation, which is expensive and time-consuming.
How to Strengthen AI Accountability: What Needs to Happen

- Strengthen the AI Act's redress chapter: The law should include mandatory investigation of complaints, procedural safeguards for complainants, and judicial oversight requirements to ensure people actually get answers when they file complaints about AI harms.
- Enforce documentation and registration requirements: The AI Act requires companies to document how their high-risk systems work and register them in a database. These requirements must be strongly implemented and enforced, not weakened through exemptions that allow companies to omit key information.
- Create AI-specific pathways in existing laws: The GDPR, product liability rules, and consumer protection frameworks need updates to address the unique challenges AI poses, such as clarifying responsibility chains and making evidence disclosure available before court proceedings begin.
- Expand collective redress options: The Representative Actions Directive allows groups to bring lawsuits on behalf of affected consumers, but funding and cost barriers make this difficult. The EU should reduce these barriers so representative entities can more easily pursue cases involving AI discrimination or harm.

The proposed AI Liability Directive was supposed to fill these gaps by creating a dedicated framework for compensation when AI causes harm. However, the European Commission withdrew this proposal last year. The revised Product Liability Directive offers some alternatives, but it has shortcomings. For example, it only allows courts to order defendants to disclose evidence after a lawsuit has already been filed, which means individuals often lack the information needed even to know whether they have a case worth pursuing.

Why Is Industry Pushing Back on the AI Act?

While civil society groups warn against weakening protections, European companies argue the AI Act is already too expensive and complex. According to DIGITALEUROPE, a trade association representing Europe's digital technology industry, the annual compliance cost for the AI Act alone is estimated at 3.3 billion euros. When combined with other recent regulations, such as cybersecurity rules and data-sharing requirements, companies face cumulative compliance costs that have risen by 13 percent over the past six years.

For small manufacturers, the burden is particularly acute. A company with 50 employees that develops an AI tool in a sector like medical devices or industrial machinery could face initial compliance costs between 320,000 and 600,000 euros, plus up to 150,000 euros annually in ongoing costs. For a small firm, this can consume up to 40 percent of profits, making it difficult to invest in innovation. DIGITALEUROPE argues that these companies are already governed by some of the world's strictest safety regulations in their sectors, and that adding an entire new AI framework on top is redundant.

Industry groups point to specific examples of innovation abandoned because of the compliance burden. One European manufacturer developed an AI camera system to reduce false alarms in safety systems by distinguishing between people and animals, but abandoned the project after assessing the documentation, testing, and monitoring requirements. Another major automotive company built an AI platform to help employees automate tasks, generating 300 new applications per week that could improve efficiency, but faced the prospect of documenting each application separately as a new AI model under the law.
The broader concern is that Europe already accounts for only 7.5 percent of global AI investment, far behind the United States and China. If compliance costs reduce AI investment by an estimated 20 percent, Europe risks falling further behind in a technology that will shape the economy for decades.

What's the Real Problem Here?

The tension between these two positions reveals a genuine dilemma. Weakening protections to reduce compliance costs could leave people vulnerable to AI harms with little recourse. But overly complex rules that stifle innovation could mean fewer AI applications are developed in Europe, limiting both the benefits and the opportunities to learn how to govern AI responsibly.

The sources suggest the real issue is that the AI Act was designed without adequate mechanisms for people to seek justice when harmed, and neither the proposed weakening nor the current rules address this fundamental gap. Rather than simply choosing between weaker rules and stricter ones, the EU could focus on what both sides actually need: clearer responsibility chains, faster and cheaper ways for people to challenge AI decisions, and simplified compliance pathways for companies that are already heavily regulated in their sectors.

The stakes are high because how Europe handles this will likely influence how other countries regulate AI, and the decisions made now will determine whether ordinary people have meaningful protection when AI systems affect their lives.