Europe's two most powerful legislative bodies have adopted competing visions for the EU AI Act, creating a fundamental disagreement over which artificial intelligence systems should face the strictest safety requirements. The European Parliament and the Council of the European Union finalized their positions in March 2026, and the differences between them reveal a deeper tension: how much responsibility for AI risks should rest with sectoral regulators (such as aviation or medical device authorities) versus a unified EU framework.

What's Actually Changing in Europe's AI Rulebook?

The most contentious shift involves high-risk AI systems embedded in products already regulated by other EU laws. The Parliament's position would largely exclude these systems from the AI Act's scope, leaving oversight to existing sectoral regulators instead. This move aligns with industry demands but has drawn sharp criticism from civil society organizations, consumer advocates, and public interest groups, who worry that fragmented oversight creates gaps in protection.

Both positions also ban AI systems that generate non-consensual deepfake intimate imagery, with the Parliament's new prohibition echoing a similar ban already proposed by the Council. Additionally, both positions require AI providers to register systems in an EU-wide database even when they believe the system doesn't qualify as high-risk, and both reinstate AI literacy obligations for companies deploying these tools.

However, a significant delay looms. Both the Parliament and the Council support postponing enforcement of high-risk AI system obligations by more than a year, which means companies won't face compliance deadlines as quickly as originally planned. This extension gives businesses more time to prepare but also delays the protective measures regulators intended to implement.

Why Is the Disagreement Over National Authority So Important?

The Council's position introduces a provision that would let individual EU member states oversee AI systems built on general-purpose AI (GPAI) models when the same provider develops both the model and the system. This approach preserves national competence and reflects concerns from some member states about losing control over AI governance within their borders. The Council also outlines how the EU AI Office, the new enforcement body, would coordinate with national authorities.

This split matters because it determines whether Europe will have a unified standard for AI safety or a patchwork of national rules. A unified approach could prevent companies from shopping for the most lenient regulator, but it risks imposing one-size-fits-all rules that don't account for local needs. National oversight, conversely, allows flexibility but risks creating loopholes where AI harms slip through the cracks between jurisdictions.

How to Navigate the Upcoming Trilogue Negotiations

The next phase involves trilogue negotiations, in which the Parliament, the Council, and the European Commission will hammer out a final compromise text. Companies and advocates should prepare for several key areas of focus:

- Scope Clarification: Expect intense debate over which AI systems fall under the AI Act versus sectoral laws, with industry pushing for broader exemptions and civil society pushing for tighter definitions.
- Enforcement Timeline: The proposed delay to high-risk obligations will likely be renegotiated, with some parties arguing for faster implementation and others seeking further extensions.
- National Authority Powers: The Council and Parliament will need to agree on how much autonomy member states retain versus how much power the EU AI Office exercises, a decision that will reshape how companies comply across borders.
- Deepfake Protections: The ban on non-consensual intimate imagery deepfakes is likely to survive, but the definition of "effective safety measures" will require clarification.
- Data Processing for Bias Detection: Both positions broaden the derogation allowing providers to process sensitive data to detect and correct bias, a move that expands data use rights but raises privacy concerns.

The trilogue process typically takes months, and disagreements this fundamental suggest a final text may not be agreed until late 2026 or beyond. Companies operating in Europe should monitor these negotiations closely, as the final rules will determine compliance costs and competitive dynamics.

What About Copyright and AI-Generated Content?

Beyond the AI Act itself, the European Parliament adopted a separate report on copyright protections for generative AI. The Parliament called on the European Commission to ensure full transparency about the copyrighted material used to train AI systems, to ensure rightsholders are fairly compensated, and to create a new licensing market for such content. The report also recommends that content created entirely by AI should not receive copyright protection, and that individuals need stronger protections against manipulated and AI-generated content.

Civil society groups acknowledged improvements in the final report, particularly its recognition that text and data mining exceptions apply to AI training. However, they argued that the report doesn't solve the core problem: the massive bargaining-power imbalance between large AI companies and content creators, publishers, and news organizations. Without addressing this structural inequality, a licensing market may simply formalize unfair deals rather than create genuine fairness.

Why Is Public Trust in AI Collapsing in the United States?

While Europe debates regulatory frameworks, the United States faces a deeper crisis of confidence in AI itself. A 2025 poll conducted by Gallup and the Special Competitive Studies Project found that 60 percent of Americans somewhat or fully distrust AI, a stark contrast to China, Indonesia, and Thailand, where large majorities of 75 to 80 percent believe AI-powered products offer more benefits than drawbacks. In the United States, only 39 percent hold that optimistic view.

Multiple factors drive this skepticism. Safety concerns dominate public discourse, including fears about AI-driven psychosis and AI-enabled teen suicides. Environmental worries about data center energy and water consumption have sparked boycott campaigns on social media. Recent mass layoffs, such as fintech company Block's decision to cut 40 percent of its workforce due to AI integration, have amplified fears about widespread job displacement. While some studies suggest AI will create more jobs than it eliminates, public conversation has focused heavily on the prospect of significant job losses, raising anxiety among white-collar workers.

This unease is reshaping politics. More than 1,500 AI-related bills were introduced in state legislatures in 2026 alone, many focused on protecting consumers and minors from AI harms. Data centers have drawn criticism from both left-leaning environmental advocates and deep-red communities.
A study found that 20 data center projects were blocked in the second quarter of 2025 due to local opposition, representing $98 billion in stalled investment. At least six Democratic governors announced plans to roll back data center incentives or impose new regulations, and lawmakers in New York, Maine, and Oklahoma are calling for temporary bans.

How Did the Anthropic-Pentagon Standoff Expose Deeper Fractures?

The recent breakdown between Anthropic and the Pentagon over AI contract terms crystallized these tensions. Anthropic drew a hard line on two issues: the use of its AI models for mass surveillance of US citizens and their use in autonomous weapons systems. When the Pentagon designated Anthropic a supply chain risk following the failed negotiations, the company filed suit, attracting amicus briefs from tech workers, Catholic theologians, the American Civil Liberties Union, and nearly 40 employees from Google and OpenAI, including Google's chief scientist.

The controversy intensified when OpenAI CEO Sam Altman announced on February 27 that his company had signed a Pentagon deal lacking the provisions Anthropic had fought for. Public reaction was swift and skeptical. Uninstalls of the ChatGPT app jumped 295 percent overnight, and a #QuitGPT campaign gained traction on social media. Some OpenAI employees publicly criticized their company's stance, and the company's hardware lead resigned in protest.

The amicus brief from the Google and OpenAI employees affirmed a shared belief in the risks underlying Anthropic's contractual red lines. Their brief noted the dangers to US democracy posed by AI-enabled surveillance and warned that today's AI systems are too immature to be relied on in lethal autonomous weapons.

This episode revealed a fracture in the social contract between AI companies, the federal government, and the American public. The Trump administration's position that contracts should provide flexibility for "all lawful uses" of AI runs counter to public opinion: an overwhelming 80 percent of US adults believe the government should maintain rules for AI safety and data security, even if doing so slows development.

Europe's regulatory approach, despite its flaws and delays, reflects this public demand for guardrails. The EU AI Act, even in its fractured form, attempts to establish baseline safety standards and oversight mechanisms. The United States, by contrast, is caught between an administration pushing for maximum AI development and a public increasingly skeptical of the technology's risks. The Anthropic standoff suggests that without clearer guardrails and public trust, the US may face growing resistance to AI deployment, particularly in sensitive domains like defense and surveillance.