Why Europe's AI Regulators Are Being Left Out of Critical Security Decisions
European regulators have been largely shut out of accessing and testing Anthropic's latest powerful AI model, Mythos, which can find and exploit computer vulnerabilities better than most human experts. The incident reveals a fundamental problem in global AI governance: private companies, not governments, are deciding how risky AI technology gets distributed and tested, leaving entire regions without a seat at the table.
What Happened With Anthropic's Mythos Model?
Anthropic announced last week that it was restricting access to Mythos, a new AI model that outperforms most humans at discovering and exploiting cybersecurity vulnerabilities. The company handpicked 12 U.S. technology companies, including Apple, Microsoft, and Amazon, as its closest circle of partners. It also granted access to another 40 organizations but did not name them publicly.
When POLITICO contacted officials from eight national European cybersecurity agencies, only Germany's agency confirmed it had entered into conversations with Anthropic about Mythos, and even it had not yet been able to test the model. Several other European government institutions reported receiving only limited, piecemeal access. This contrasts sharply with the United Kingdom, where the AI Security Institute had already tested Mythos and released its assessment.
"Mythos gives us an early taste of how crucial access to frontier AI capabilities is going to be in the years to come. Europe currently does not have a plan for how to secure that access," said Daniel Privitera, founder of the Berlin-based AI nonprofit KIRA.
Why Does This Matter for European Security and Sovereignty?
The exclusion of European regulators from testing a powerful AI model highlights a deeper problem: the world has no global system to oversee the risks of frontier AI development, despite repeated warnings from leading AI researchers about the technology's potential impact on economies and labor markets, and even its existential risks.
Claudia Plattner, Germany's chief cybersecurity official who leads the national cybersecurity agency BSI, raised a critical question about whether tools "of such extraordinary power" like Mythos will be available on the open market in the future. She emphasized that this question "has profound implications for national and European security and sovereignty."
"It is deeply concerning that it is up to tech companies, rather than regulators, to decide how to handle the risks," said Yoshua Bengio, one of the three godfathers of AI from the Université de Montréal. He stressed that it was "essential" to set up ways for governments or third parties to run checks on the technology "to protect the public."
How Are European Regulators Responding to This Gap?
The European Union has the EU AI Act, which entered into force in August 2024 and is widely considered the most comprehensive attempt anywhere to regulate AI through binding law. However, the Act's enforcement mechanisms are running into serious implementation problems. The first major obligations for high-risk AI systems were supposed to take effect in August 2026, but in November 2025, the European Commission proposed delaying those obligations by up to 16 months, pushing the deadline to December 2027 at the earliest.
The reasons for the delay reveal institutional failures rather than deliberate policy changes. The standardization bodies responsible for developing technical standards that companies would use to demonstrate compliance missed their 2025 deadline. Without those standards, companies would have been required to meet obligations with no agreed method for doing so. Additionally, many EU member states missed their own August 2025 deadline to designate the national competent authorities responsible for enforcement.
The Commission's AI Office does maintain a dialogue with Anthropic under the EU's code of practice, but European Commission spokespeople did not comment on whether the Mythos model was part of those talks or whether the office has had access to test it.
The Broader Regulatory Landscape
- EU AI Act Status: The law entered force in August 2024, but high-risk AI system obligations have been delayed from August 2026 to December 2027, affecting AI used in hiring, credit decisions, education, and welfare systems.
- U.S. State-Level Regulation: More than 28 states have passed AI-related legislation, with California's SB 53 and New York's RAISE Act representing the most serious attempts to impose obligations on frontier AI developers, though these are now being challenged by the federal government.
- Global Governance Gap: Multiple initiatives exist, including the G7's Hiroshima Process and the UN's Global Dialogue on AI Governance, but they lack the political backing to deliver actual oversight of private AI companies.
What Does This Mean for the Future of AI Governance?
The Mythos situation exposes a critical vulnerability in how the world manages frontier AI technology. Marietje Schaake, a former European Parliament lawmaker who helped shepherd an EU code of practice for advanced AI developers, noted that "the fact that models with far-reaching impact are governed by a private company is concerning." She argued that "now is a good moment" for the world to agree on how to disclose "sensitive corporate information and oversight."
Laura Caroli, an independent AI researcher who advised on drafting the EU AI Act, explained that the EU was "sidelined" because Mythos has not been released on the market. If it were, Anthropic would face binding rules and commitments under EU law. However, she noted that the EU could maintain some oversight through the network of AI safety institutes, of which the Commission's AI Office is a member.
The broader context makes this exclusion even more significant. The EU and United States are both struggling with the gap between writing AI rules and making them work in practice. In Europe, the problem is institutional and technical: organizations tasked with building compliance infrastructure did not deliver on schedule. In the United States, the problem is political: the federal government has not built a national framework, and states that filled the gap with legislation are now being challenged by federal authority without being offered an alternative.
"It makes you wonder if it wasn't Anthropic, but China's DeepSeek," said Laura Caroli, highlighting the geopolitical implications of allowing private companies to control access to powerful AI models.
For now, European officials remain at the mercy of Anthropic and other U.S.-based AI companies when it comes to accessing and testing frontier models. The European Commission's digital spokesperson Thomas Regnier stated that the executive was "currently assessing possible implications" with regard to EU legislation and was keeping tabs on the "security implications" of the technology. However, without direct access to test models like Mythos, European regulators cannot fulfill their mandate to protect citizens from AI-related risks.