Anthropic has submitted sworn court filings pushing back against Pentagon allegations that the company could sabotage its Claude AI model once deployed in military operations. The dispute centers on a fundamental question about AI safety and control: once an AI system is handed over to the military, can the company that built it still influence or disable it? The answer, according to Anthropic's legal team, is no.

What Is the Pentagon Actually Accusing Anthropic Of?

The Department of Defense (DOD) has raised concerns that Anthropic could manipulate Claude models in the middle of military operations, potentially compromising national security. This accusation emerged as part of broader tensions between the Trump administration and Anthropic, with the Pentagon declaring the relationship posed an "unacceptable risk to national security." The timing is significant: these court filings came just one week after Trump administration officials declared the Pentagon-Anthropic relationship effectively over.

The core concern appears to stem from how modern AI systems work. Unlike traditional software with clear ownership boundaries, large language models (LLMs) like Claude exist in a gray zone where the company that created them might theoretically retain some ability to update, modify, or even disable them remotely. The Pentagon's worry is that Anthropic could use this capability to sabotage military operations at a critical moment.

How Does Anthropic Say It Protects Against Model Manipulation?

Anthropic's defense rests on a technical argument: once Claude is deployed to military systems, the company cannot manipulate it. According to the company's sworn declarations, the architecture of deployed AI models makes remote tampering impossible. This is the critical distinction between cloud-based AI services, where the company retains control, and locally deployed models, where control transfers to the user.

The company's legal filing suggests that Anthropic understands the Pentagon's concern but argues it rests on a misunderstanding of how deployed AI systems actually work. Once the military has Claude running on its own servers and infrastructure, Anthropic loses the ability to modify it remotely, just as Microsoft cannot reach into a copy of Windows running on a machine that is cut off from its update servers.

Understanding AI Deployment Security in Defense Systems

- Cloud-Based vs. Deployed Models: Cloud-based AI services like ChatGPT remain under the provider's control and can be updated or disabled remotely. Deployed models, installed directly on military servers, operate independently once installed and cannot be remotely manipulated by the original developer.
- Trust and Verification: The Pentagon's concerns highlight why military institutions need independent verification that deployed AI systems cannot be compromised by their creators. This requires technical audits and security assessments before deployment (a minimal sketch of such a check follows this list).
- Contractual Safeguards: Defense contracts typically include provisions that specify exactly what access the AI company retains after deployment, drawing clear boundaries between the developer and the military operator.
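To make the first two points concrete, here is a minimal sketch of the kind of integrity check an operator could run on its own infrastructure. This is an illustration under assumptions, not anything Anthropic or the Pentagon has described in the filings: the file path, digest value, and workflow are hypothetical, and a real deployment would verify signed manifests produced during an independent audit rather than a hard-coded hash.

```python
# Hypothetical operator-side integrity check for locally deployed model
# weights. The path and expected digest below are placeholders; a real
# system would verify a signed manifest from an independent audit.
import hashlib
from pathlib import Path

# Digest recorded by the operator at deployment time (placeholder value).
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so multi-gigabyte weight files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def weights_unmodified(path: Path, expected: str) -> bool:
    """Return True only if the on-disk weights still match the recorded digest.

    The check runs entirely on the operator's own hardware: a vendor with
    no network path to this machine cannot alter the file without the
    mismatch showing up here.
    """
    return sha256_of(path) == expected

if __name__ == "__main__":
    weights = Path("/opt/models/deployment/weights.bin")  # hypothetical path
    if not weights.exists():
        print(f"{weights} not found (placeholder path); nothing to verify.")
    elif weights_unmodified(weights, EXPECTED_SHA256):
        print("Weights match the recorded digest; safe to load.")
    else:
        print("Digest mismatch: weights changed since deployment; do not load.")
```

The point of the sketch is architectural rather than cryptographic: once the weights sit on hardware the operator controls, any change a vendor could make would have to survive the operator's own verification, which is the substance of Anthropic's argument.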
The broader context here matters significantly. Anthropic has been positioning Claude as an enterprise-grade AI system suitable for sensitive applications, including government use. The company has emphasized safety and alignment, arguing that Claude is designed to be more cautious and controllable than competing models. However, the Pentagon's skepticism suggests that even strong safety credentials may not be enough to overcome institutional concerns about foreign influence or corporate control over military AI systems.

This dispute also reflects a larger geopolitical tension. The Trump administration has taken a harder line on AI companies working with the military, and Anthropic, despite being a US-based company, has faced scrutiny over its funding sources and international connections. The Pentagon's concerns about "unacceptable risk" may stem as much from political considerations as from technical ones.

What Does This Mean for the Future of AI in Defense?

The Pentagon's decision to adopt Palantir's Maven AI system as an official program of record suggests the military is moving toward AI solutions from companies with deeper defense-industry roots. This could signal a shift away from partnerships with newer AI companies like Anthropic, regardless of their technical capabilities: the military may simply prefer established defense contractors with existing security clearances and institutional relationships with the Pentagon.

For Anthropic, the stakes are significant. The company has invested heavily in building Claude's reputation as a safe, reliable AI system, and losing military contracts would be a blow to both its credibility and its revenue potential. Its legal strategy, however, suggests it is prepared to fight the Pentagon's characterization, arguing that its technical architecture actually provides the security guarantees the military is seeking.

The outcome of this dispute could reshape how AI companies approach military contracts. If Anthropic prevails, it may establish a precedent that locally deployed AI models are secure from remote manipulation by their developers. If the Pentagon's concerns are validated, it could lead to new regulatory requirements or contractual provisions that make it harder for AI companies to work with defense agencies. Either way, this case highlights the tension between innovation and security in an era when artificial intelligence is becoming central to military operations.