Europe's landmark AI Act has a critical weakness: it offers almost no practical way for people harmed by AI systems to obtain justice or compensation. While the regulation bans the most dangerous AI applications and requires safety testing for high-risk systems, it leaves individuals with minimal tools to seek redress when things go wrong. As the European Commission considers weakening protections further through its AI Act Omnibus proposal, experts are sounding the alarm that the law is missing essential safeguards.

What Does "Redress" Actually Mean in AI Regulation?

Redress refers to the practical mechanisms that allow people to challenge AI decisions, obtain explanations, file complaints, and seek compensation when AI systems cause harm. The AI Act does include some basic protections: individuals have a right to obtain an explanation for certain AI decisions and can file complaints with regulators.

However, these mechanisms fall far short of what is needed. The right to explanation can be interpreted narrowly, limiting its usefulness. The complaints mechanism, while open to anyone, offers no procedural safeguards for complainants, does not require regulators to investigate or respond, and lacks judicial oversight. In other words, someone could file a complaint and receive no meaningful follow-up.

Why Aren't Existing EU Laws Filling the Gap?

The AI Act's architects assumed that existing EU laws would provide adequate remedies for AI-related harms. They pointed to the General Data Protection Regulation (GDPR), which governs how personal data is processed, and to equality and non-discrimination laws as sufficient safeguards. However, research from the Centre for Democracy and Technology Europe shows that this assumption is flawed.

The GDPR does offer stronger protections than the AI Act, including rights of action against both companies that misuse data and regulators who fail to enforce the law. Data protection authorities have real procedural power to investigate and impose penalties. But the GDPR was designed for data processing, not the complex, multi-layered development of AI systems, where responsibility is often unclear. When multiple companies contribute to building an AI model, the GDPR's framework struggles to assign accountability.

Equality and non-discrimination laws offer another potential pathway, particularly through the burden-of-proof reversal: if someone can show preliminary evidence of discrimination, the defendant must prove they did not discriminate. This is powerful in cases of algorithmic bias. Yet these laws depend on individual lawsuits, which are expensive and risky for ordinary people. They also fail to address the structural power imbalances that make algorithmic discrimination possible in the first place.

Steps to Strengthen AI Redress Pathways

- Enhance Documentation Requirements: The AI Act's documentation and registration obligations could be valuable accountability tools if strongly implemented and enforced. However, the current omnibus compromise texts would weaken these requirements by allowing simplified compliance for more companies and permitting omissions of key information when registering high-risk systems. Strengthening rather than weakening these requirements is essential for accountability.
- Expand Collective Redress Options: The Representative Actions Directive allows consumer groups to bring lawsuits on behalf of affected individuals, potentially overcoming the barriers of individual litigation. However, the directive leaves funding decisions to individual member states, creating inconsistent access to justice. Standardizing support for representative actions could make this pathway more viable.
- Implement Pre-Litigation Disclosure: The withdrawn AI Liability Directive included a provision requiring companies to disclose evidence to potential claimants before court proceedings began. The revised Product Liability Directive allows evidence requests only after litigation starts, leaving individuals hesitant to pursue claims. Restoring pre-litigation disclosure would help level the information imbalance between individuals and AI companies.
- Include Non-Material Harm Compensation: Current frameworks do not require compensation for non-material harms such as privacy violations or mental health impacts, despite AI's serious potential to cause these injuries. Making compensation for non-material harms mandatory across member states would better protect individuals.

What's Happening to the AI Act Right Now?

The European Commission's AI Act Omnibus proposal, released recently, suggests changes that would significantly weaken safeguards rather than strengthen them. These are not minor technical adjustments; they would reduce protections for the systems deemed most dangerous to health, safety, and fundamental rights. Civil society organizations have repeatedly warned against these changes, yet the debate continues.

Meanwhile, enforcement of the AI Act is beginning in earnest. In force since 2024, the Act entered its enforcement phase in March 2026. The system works much like food safety inspection: regulators do not monitor every AI application constantly. Instead, they respond to complaints, conduct spot checks on large providers, and pull dangerous systems from the market when identified. Each EU country designates its own AI regulators, often building on existing authorities such as telecom watchdogs or competition agencies, coordinated through a central office in Brussels.

The enforcement approach targets high-stakes uses: massive frontier models trained with enormous computing resources, companion chatbots designed to act as emotional partners, and tools used to decide employment, loans, social benefits, or police interventions. Banned systems, such as manipulative social-scoring tools, face immediate withdrawal and fines of up to 35 million euros or 7 percent of global annual revenue, whichever is higher. High-risk failures, such as biased hiring algorithms, trigger audits, mandated fixes, and penalties of up to 15 million euros or 3 percent of revenue.
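To make those two penalty tiers concrete, here is a minimal sketch of the cap arithmetic. It assumes the higher of the fixed amount and the revenue percentage applies, following the GDPR-style penalty model; the function and tier names are illustrative, not taken from the Act's text.

```python
def max_fine_eur(annual_revenue_eur: float, tier: str) -> float:
    """Illustrative upper bound on an AI Act fine for a given violation tier.

    Assumes the higher of the fixed cap and the revenue share applies,
    per the figures cited above. A sketch, not legal advice.
    """
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),  # banned systems: 35M EUR or 7%
        "high_risk_violation": (15_000_000, 0.03),  # high-risk failures: 15M EUR or 3%
    }
    fixed_cap, revenue_share = tiers[tier]
    return max(fixed_cap, revenue_share * annual_revenue_eur)

# A provider with 2 billion euros in global annual revenue:
print(max_fine_eur(2e9, "prohibited_practice"))  # 140000000.0 (7% exceeds the 35M cap)
print(max_fine_eur(2e9, "high_risk_violation"))  # 60000000.0  (3% exceeds the 15M cap)
```

Under these figures, both tiers cross over at 500 million euros of annual revenue: below that, the fixed cap is the binding maximum; above it, the percentage governs.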
How Are Companies Adapting to AI Governance Requirements?

For organizations using AI, the regulatory landscape is fragmented but increasingly stringent. The EU AI Act applies globally to any company doing business in Europe, much as the GDPR does for data privacy. The UK takes a lighter-touch approach, empowering existing regulators such as the Information Commissioner to manage AI within their domains. The United States lacks a single federal AI law, relying instead on state-level rules and agency guidance.

California has emerged as a regulatory leader. Its rules, in force since January 2026, target large labs training cutting-edge AI models and apps that mimic human friends, particularly emotional chatbots that could mislead vulnerable users, including children and teenagers. Violations can lead to lawsuits or Attorney General actions, with minimum penalties of 1,000 dollars per harmed user.

Core principles are emerging across jurisdictions. Transparency requires clear labeling when content is AI-generated; the FTC has made clear that deceptive use of AI in advertising violates consumer protection law. Fairness and non-discrimination mandate bias detection for high-risk systems such as hiring tools. Accountability requires formal risk management systems for higher-risk AI. Human oversight ensures people can intervene in and override high-risk decisions.

A practical approach for organizations is to benchmark AI governance against the strictest applicable standards, as the sketch below illustrates. Meeting one high bar is more defensible and efficient than maintaining a separate compliance track for each jurisdiction.
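One way to operationalize that advice is to encode each jurisdiction's obligations as scored controls and always adopt the strictest level per control. A minimal sketch follows; the jurisdictions, control names, and numeric levels are invented for illustration and are not drawn from any statute.

```python
# Hypothetical per-jurisdiction obligations; a higher level means a
# stricter requirement. All names and values are illustrative only.
REQUIREMENTS = {
    "eu_ai_act": {"ai_content_labeling": 3, "bias_audit": 3, "human_oversight": 3},
    "us_ftc":    {"ai_content_labeling": 2, "bias_audit": 1, "human_oversight": 1},
    "uk_ico":    {"ai_content_labeling": 2, "bias_audit": 2, "human_oversight": 2},
}

def strictest_baseline(jurisdictions: list[str]) -> dict[str, int]:
    """Merge per-jurisdiction controls, keeping the strictest level of each."""
    baseline: dict[str, int] = {}
    for j in jurisdictions:
        for control, level in REQUIREMENTS[j].items():
            baseline[control] = max(baseline.get(control, 0), level)
    return baseline

# A company active in all three markets builds one program at the high bar:
print(strictest_baseline(["eu_ai_act", "us_ftc", "uk_ico"]))
# {'ai_content_labeling': 3, 'bias_audit': 3, 'human_oversight': 3}
```

The design mirrors how many companies responded to the GDPR: one global program pegged to the strictest regime, rather than per-market variants that drift apart.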
What's the Real Challenge in Enforcing These Rules?

Designing regulations on paper is relatively straightforward compared to building the institutions that will actually enforce them. The AI Act, California's laws, and similar efforts assume that regulators will pick up the phone, read complaints, investigate, and write decisions. This requires new bodies and expanded staff at existing agencies, along with technical expertise and funding.

Wealthy Western European countries can often expand existing authorities and hire specialized staff. Smaller or less wealthy countries face significant challenges. Some are still struggling with basic digitalization and have not even decided which ministry should oversee AI. Simply copying EU or California rules on paper risks creating regulations that exist in law but cannot be enforced in practice.

To address this gap, countries can pool resources through regional expert teams, joint investigation units, or shared specialized labs serving multiple nations. Cross-border networks of regulators, potentially supported by the EU, Council of Europe, or UN agencies, can reduce costs and prevent smaller states from becoming dependent on technical advice from the companies they are supposed to regulate. Without such cooperation and capacity building, enforcement will remain inconsistent and incomplete.

The bottom line: as AI regulation moves from theory to practice, the redress gap represents a fundamental flaw. Without meaningful pathways for individuals to challenge AI decisions and seek compensation, regulations become symbolic rather than protective. Europe's current approach leaves people vulnerable precisely when they need protection most.