Inside the NSA's Quiet Bet on Anthropic's AI, Despite Pentagon Pushback
The National Security Agency is deepening its use of Anthropic's Mythos AI model, even as the Department of Defense has expressed serious concerns about the company's security practices and supply chain risks. This contradiction reveals a growing tension within the U.S. government: should agencies prioritize access to cutting-edge cybersecurity tools, or heed warnings from defense officials skeptical of Anthropic's trustworthiness?
Why Is the Pentagon Concerned About Anthropic?
The friction between the Pentagon and Anthropic dates back to contract negotiations earlier this year. The Department of Defense wanted Anthropic's Claude models, the company's family of AI assistants that includes Claude Haiku, Sonnet, and Opus, available for all lawful government uses. Anthropic, however, drew a clear line in the sand. The company refused to allow its tools to be used for mass surveillance or autonomous weapons systems, a stance that created significant tension with defense officials.
In February, the Department of Defense went further, attempting to pressure vendors to cut ties with Anthropic over supply chain security concerns. Yet despite these official warnings, the NSA has continued to expand its deployment of Mythos, Anthropic's specialized security-focused model. According to sources familiar with the situation, the NSA's use of Mythos is spreading across the agency, though the exact scope and methods of deployment remain unclear.
How Are Government Agencies Actually Using Mythos?
While the NSA hasn't publicly detailed how it's deploying Mythos, other organizations have found practical applications for the model. Security researchers and teams across government are using Mythos to hunt for vulnerabilities in their own systems, essentially leveraging AI to identify weaknesses before malicious actors can exploit them. This defensive use case represents one of the most valuable applications of advanced AI in cybersecurity.
Anthropic, concerned about the potential for misuse, has taken a cautious approach to Mythos access. The company has limited the model's availability to approximately 40 groups due to fears about its offensive capabilities. This restriction reflects a broader tension in AI development: how do you make powerful security tools available to those who need them while preventing bad actors from weaponizing the same technology?
Steps to Understanding the Government's AI Security Strategy
- Vulnerability Detection: Organizations are using Mythos to identify security weaknesses in their systems before attackers can find them, representing a proactive defense approach.
- Access Control: Anthropic has restricted Mythos to about 40 approved groups to balance security needs with concerns about offensive misuse of the technology.
- Interagency Coordination: High-level meetings between Anthropic's leadership and White House officials suggest the government is working to align AI security strategy across multiple departments.
What Happened at the White House Meeting?
In a significant development, Dario Amodei, Anthropic's CEO and co-founder, met with White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent to discuss how the government uses Mythos and Anthropic's broader security strategies. While neither the Pentagon nor Anthropic has publicly commented on the meeting, insiders described it as productive. The discussion appears to have focused on how other government departments might responsibly deploy Anthropic's tools.
This high-level engagement suggests that despite the Pentagon's reservations, other parts of the government see significant value in Anthropic's technology. The meeting signals an effort to move past the earlier contract disputes and find a path forward that addresses both innovation needs and legitimate security concerns.
Is the Real Risk Ignoring Powerful Security Tools?
The NSA's decision to expand Mythos use despite Pentagon warnings raises a fundamental question about government decision-making: which is the greater risk, using tools from a company some officials distrust, or missing out on AI capabilities that could strengthen national cybersecurity? While some Defense officials continue to view Anthropic as untrustworthy, others within the government appear ready to move past the earlier disputes and focus on practical security benefits.
This disagreement reflects a larger debate in AI governance. Anthropic has positioned itself as an AI safety company, founded in 2021 by former OpenAI researchers including Dario and Daniela Amodei, with explicit commitments to responsible AI development. The company's refusal to support mass surveillance or autonomous weapons represents a principled stance, but it has also created friction with government agencies accustomed to fewer restrictions on technology use.
The gap between official policy statements and what's actually happening on the ground appears enormous. Pentagon officials issued warnings about Anthropic, yet the NSA is quietly expanding its use of the company's most powerful security tool. This disconnect raises important questions about how government agencies evaluate risk, balance innovation with caution, and coordinate their technology strategies across different departments. As AI becomes increasingly central to national security, these tensions will likely intensify rather than resolve.