Why US and EU Lawmakers Are Finally Finding Common Ground on AI Safety
US and European lawmakers are discovering that their approaches to AI regulation are far less divided than the popular "US innovates, Europe regulates" stereotype suggests. A recent transatlantic policy exchange revealed that both sides share fundamental concerns about protecting citizens and boosting competitiveness, even as they pursue different regulatory strategies. The findings challenge assumptions that have dominated tech policy debates and point toward concrete opportunities for cooperation.
What Are the Real Barriers to AI Competitiveness Beyond Regulation?
When a bipartisan delegation of US state lawmakers visited Paris and Brussels in December 2025 as part of the Transatlantic Tech Exchange, they expected to find a stark divide between American innovation culture and European regulatory caution. Instead, they discovered something more nuanced: structural obstacles to AI competitiveness extend far beyond regulation and span the entire AI value chain.
European Commission officials discussed their support for a simplification agenda aimed at "radically lightening the regulatory load and related costs," contradicting the perception that Europe wants only to regulate. Meanwhile, the Commission's November 2025 Digital Omnibus proposal would even loosen or eliminate some requirements of the EU AI Act. The real competitive challenges, according to the study tour findings, include fractured capital markets, fragmented venture ecosystems, and gaps in industrial infrastructure that affect both continents differently.
The timing of these discussions underscores the urgency. The Trump administration's National Security Strategy criticized Europe's "failed focus on regulatory suffocation" just days before the delegation arrived in Brussels. Simultaneously, US federal policymakers released Executive Order 14365, which aims to preempt certain state AI legislation to "check the most onerous and excessive laws emerging from the States that threaten to stymie innovation." This federal-state tension mirrors transatlantic disagreements, yet the study tour revealed that American lawmakers were genuinely interested in European regulatory approaches rather than dismissive of them.
Where Can the US and EU Actually Agree on AI Policy?
The study tour identified two clear areas where transatlantic cooperation is not just possible but urgent: children's safety and "AI redlines," or use cases that present unacceptable risk to civil rights or national security.
Children's safety emerged as the strongest convergence point. Lawmakers on both sides expressed shared motivation to protect young people from AI harms, yet they often lack visibility into what other jurisdictions have tried and tested. This knowledge gap creates inefficiency and duplicative policy efforts.
The specific topics ripe for exchange include:
- Age Verification Methods: Privacy-protecting ways to verify a user's age without requiring official identification, a challenge that is especially acute for children and varies across countries
- Mental Health Impacts: Understanding how AI chatbots affect children's psychological wellbeing and developing evidence-based safeguards
- Child Sexual Abuse Material: Detection and prevention of AI-generated content that exploits minors, a threat that transcends borders
AI redlines represent the second pillar of potential cooperation. Both US and European policymakers recognize that certain AI applications, such as social scoring systems or malicious cyberattacks on critical infrastructure, should be prohibited outright. Establishing shared definitions of these redlines could prevent a fragmented global landscape where companies face contradictory rules.
How to Build Transatlantic AI Cooperation: A Practical Framework
The study tour produced specific recommendations for turning shared concerns into coordinated action:
- Launch a Formal AI Dialogue: Establish a structured mechanism bringing together EU, member-state, and US state-level legislators to regularly exchange lessons on children's safety and AI redlines, creating accountability and continuity
- Create a Centralized Knowledge Repository: Develop a dashboard or database managed by a neutral third party to house transparency reports, audit results, and red-teaming findings from jurisdictions worldwide, enabling comparison and preventing duplicative efforts
- Invest in Cross-Sectoral Expertise: Recognize that legislatures are not currently structured to address AI's whole-of-society challenges; lawmakers need deeper expertise spanning technology, ethics, economics, and public health
The knowledge-sharing mechanism is particularly important because it would centralize information that currently exists in silos. Transparency mandates, audit reports, and red-teaming results from different states and EU member states could be compared and analyzed to identify patterns, successes, and failures. This approach transforms AI governance from a reactive, fragmented process into an iterative, evidence-informed one.
The timing for such cooperation is critical. Recent US tariff threats and diplomatic pressure to change EU laws like the Digital Services Act and Digital Markets Act have strained transatlantic relations. European Commission President Ursula von der Leyen declared that "we set our own standards, we set our own regulations," signaling Europe's determination to maintain its approach. Yet the study tour demonstrated that this posture does not reflect a fundamental disagreement on AI's risks; rather, it reflects different institutional structures and policy histories.
Why the "US Innovates, Europe Regulates" Narrative Misses the Point
The stereotype that the US prioritizes innovation while Europe prioritizes regulation has dominated tech policy discourse for years. The study tour revealed that this framing obscures more than it clarifies. Both sides face the same core challenges: balancing rapid AI deployment against thoughtful guardrails, managing heightened political tensions, and navigating AI's sprawling impact across policy domains from healthcare to national security.
Analyses like the 2024 Draghi Report have highlighted regulation as a roadblock to EU AI competitiveness. However, the transatlantic exchange showed that bloc-wide obstacles far beyond regulation constrain European AI development. Fractured capital markets, limited venture funding, and fragmented industrial infrastructure create barriers that no regulatory reform alone can address. Similarly, the US faces its own structural challenges, including the federal-state regulatory fragmentation that Executive Order 14365 attempts to resolve.
The window for transatlantic cooperation may be narrowing. Geopolitical tensions, trade disputes, and competing national security strategies are reshaping technology policy. Yet the study tour demonstrated that shared concerns about children's safety and high-risk AI applications transcend these tensions. By focusing cooperation on these narrow, high-consensus areas first, policymakers can build trust and infrastructure for broader collaboration as AI's impacts become clearer and more urgent.