The Messy Reality of AI Companies Working With the Pentagon: Why Neither Side Is Ready

The U.S. government and AI companies are colliding over defense contracts, but neither side has figured out how to manage the relationship responsibly. When OpenAI CEO Sam Altman announced on social media that his company would take over a Pentagon contract that Anthropic had just rejected, he triggered an immediate backlash from employees and users over concerns about mass surveillance and automated weaponry. The conflict exposed a deeper problem: as AI becomes critical national security infrastructure, both the tech industry and Washington are operating without clear rules or expectations.

What Exactly Happened Between OpenAI, Anthropic, and the Pentagon?

The sequence of events unfolded rapidly in early March 2026. Anthropic, an AI safety-focused company, walked away from a Pentagon contract because the Department of Defense refused to accept contractual limitations on surveillance and automated killing systems. Hours later, OpenAI announced it had won the same contract. Defense Secretary Pete Hegseth then threatened to designate Anthropic as a "supply-chain risk," a designation that would effectively cut the company off from hardware and hosting partners and potentially destroy it.

When Altman held a public question-and-answer session on X to explain the decision, he faced pointed questions about whether OpenAI was comfortable participating in mass surveillance and autonomous weapons systems. His responses revealed the core tension: he argued that setting national security policy was not his role, and that elected leaders, not private companies, should make those decisions. Yet his acceptance of the contract without the safeguards Anthropic had demanded suggested OpenAI would defer to government demands more readily than its competitor.

Why Is This Such a Problem for the Tech Industry?

The threat against Anthropic represents an unprecedented move against an American company. According to former Trump official Dean Ball, the administration was attempting to change contract terms that had been established years earlier, a tactic that would never be acceptable between private companies. The message sent to other vendors is clear: disagreeing with the government on defense contracts carries existential risk.

This creates a chilling effect across the entire industry. Companies now face a choice between accepting government demands without pushback or risking retaliation. OpenAI, despite winning the contract, finds itself caught between two opposing forces: employees demanding ethical guardrails and right-wing media scrutinizing whether the company is sufficiently loyal to the Trump administration. There are no neutral positions available.

How Should AI Companies Navigate Government Relationships?

The defense industry has operated under a specific model for decades that AI companies have not yet adopted. Consider these key differences:

  • Regulatory Structure: Traditional defense contractors like Raytheon and Lockheed Martin operate as slow-moving, heavily regulated conglomerates that provide political cover by staying focused on technology rather than politics.
  • Long-Term Stability: Established defense firms can weather political transitions because they have institutional relationships and regulatory frameworks that transcend individual administrations.
  • Startup Vulnerability: AI companies like OpenAI and Anthropic move faster than their predecessors but lack the political insulation needed to survive when political winds shift, leaving them exposed to retaliation or abandonment.

The fundamental problem is that AI companies are being forced to play a game they did not sign up for. OpenAI built its business on consumer applications and investor enthusiasm, not on being an industrial wing of the Pentagon. Yet the scale of its ambitions and the intensity of its capital requirements mean it cannot avoid serious government engagement.

What Does This Mean for the Future of AI and National Security?

The current situation reveals that both the government and AI companies are unprepared for the relationship they are entering. When Altman testified before Congress in 2023, he could follow a social media playbook: be bombastic about world-changing potential while acknowledging risks and engaging enthusiastically with lawmakers. That approach worked for steering investors and heading off regulation. Less than three years later, it is no longer tenable.

The Trump administration appears willing to use its power to force compliance through threats of designation and blacklisting. Meanwhile, tech investors aligned with the administration seem content with what former Trump adviser David Sacks and others have called "tribal logic," where political loyalty matters more than free enterprise principles. Few are willing to defend the broader principle that companies should not be destroyed for disagreeing with government demands.

OpenAI may benefit in the short term from its willingness to accept the Pentagon contract, but the company remains vulnerable. When political winds inevitably shift, the precedent of accepting government demands without safeguards could become a liability. The lack of a clear, stable framework for how AI companies should work with the government means that every contract negotiation becomes a political minefield, and every decision carries the risk of alienating employees, users, or the administration in power.