Europe's AI Regulators Are Being Shut Out of Critical Security Tests

European regulators have been largely excluded from testing Anthropic's new Mythos AI model, a powerful system designed to find and exploit cybersecurity vulnerabilities, despite the EU's ambitious AI Act framework. When Anthropic announced the model last week, it granted access to a handpicked circle of 12 U.S. technology companies, including Apple, Microsoft, and Amazon, plus roughly 40 unnamed organizations. Of eight European national cyber agencies contacted by POLITICO, only Germany's cybersecurity agency reported being in talks with Anthropic about the model, and it had not yet been able to test it.

The exclusion underscores a troubling reality for the European Union: despite positioning itself as a global superregulator of technology through its AI Act, the bloc has limited leverage over American companies developing cutting-edge AI systems. The contrast is particularly stark when compared to the United Kingdom, where the AI Security Institute recently tested Mythos and released its assessment on Monday.

Why Does Access to Frontier AI Models Matter for European Security?

Mythos represents a watershed moment in AI development. Anthropic claims the model outperforms most humans at identifying and exploiting software vulnerabilities, making it both a powerful defensive tool and a potential weapon in the hands of malicious actors. The decision to restrict access raises immediate questions about cybersecurity preparedness across Europe. Without the ability to test and understand how such models work, European governments cannot adequately assess risks to critical infrastructure, financial systems, or national security.

Germany's chief cybersecurity official, Claudia Plattner, highlighted the stakes in a statement to POLITICO. "A pressing question is whether tools of such extraordinary power like Mythos will in future be on the open market," she said, adding that the answer "has profound implications for national and European security and sovereignty."

"Mythos gives us an early taste of how crucial access to frontier AI capabilities is going to be in the years to come. Europe currently does not have a plan for how to secure that access," said Daniel Privitera, founder of the Berlin-based AI nonprofit KIRA.

The lack of access also creates a practical problem for European cyber agencies. Job Holzhauer, a spokesperson for the Dutch cybersecurity agency, explained that "the actual impact of the vulnerabilities found is difficult to verify without technical details." Without hands-on testing, European officials cannot independently validate Anthropic's claims about the model's capabilities or understand which types of systems are most at risk.

How Can European Governments Maintain Oversight of Advanced AI Models?

  • Leverage the EU AI Act's Code of Practice: The European Commission's AI Office maintains a dialogue with Anthropic under the EU's code of practice designed to help companies comply with AI Act requirements. European officials could use this channel to demand more transparency about model capabilities and request testing access before models are deployed commercially.
  • Strengthen the Network of AI Safety Institutes: The Commission's AI Office is part of a growing international network of AI Safety and Security Institutes. European governments could invest in building out this network and establishing formal protocols requiring companies to share frontier models with these institutes before public release.
  • Establish Binding Disclosure Requirements: The EU could amend its AI Act or related cybersecurity regulations to require companies to disclose sensitive information about high-risk models to European regulators, similar to how pharmaceutical companies must share safety data with health authorities before drug approval.

Laura Caroli, an independent AI researcher who advised on the drafting of the EU's Artificial Intelligence Act, acknowledged the EU's current limitations. "The EU is sidelined because the model is not released on the market," she explained. "If it was, Anthropic would have binding rules and commitments under EU law." However, Caroli noted that the EU could maintain some oversight through the network of AI safety institutes.

The European Commission's digital spokesperson, Thomas Regnier, stated that the executive was "currently assessing possible implications" with regard to EU legislation and monitoring the "security implications" of the technology. Under the AI Act, providers like Anthropic must address cyber risks stemming from their models, and the bloc's Cyber Resilience Act imposes mandatory cybersecurity requirements "for all products with digital elements placed on the EU market," Regnier said.

What Does This Gap Reveal About Global AI Governance?

The Mythos situation exposes a fundamental weakness in the global approach to AI safety. Despite numerous international initiatives, including the G7's Hiroshima Process, the United Nations' Global Dialogue on AI Governance, and a network of AI Safety and Security Institutes, no binding global mechanism exists to scrutinize and police what companies like Anthropic do with risky technology.

"The fact that models with far-reaching impact are governed by a private company is concerning," said Marietje Schaake, a former European Parliament lawmaker and former adviser to the European Commission who helped shepherd the EU code of practice for developers of the most advanced AI models.

Yoshua Bengio, one of the three godfathers of AI and a researcher at the Université de Montréal, told POLITICO it was "deeply concerning" that it's up to tech companies rather than regulators to decide how to handle AI risks. He emphasized that it was "essential" to set up ways for governments or third parties to run checks on the technology "to protect the public."

The exclusion of European regulators also raises a troubling hypothetical. As Caroli pointed out, "It makes you wonder if it wasn't Anthropic, but China's DeepSeek?" The question highlights how geopolitical competition in AI development could undermine European security if the continent lacks the access and influence needed to understand frontier capabilities developed by foreign companies.

For now, European officials are at the mercy of American tech companies. The EU's AI Act represents an ambitious attempt to regulate artificial intelligence, but the Mythos case demonstrates that regulatory frameworks alone cannot guarantee access to the systems they're meant to oversee. Without a coordinated international approach and binding agreements requiring companies to share frontier models with government agencies, Europe risks being left in the dark about the most powerful AI systems shaping the future of cybersecurity and national defense.