A federal judge has temporarily blocked the Trump administration's attempt to exclude Anthropic, the AI company behind Claude, from doing business with US government agencies. The preliminary injunction, issued on March 26, 2026, suspends enforcement of a controversial "supply chain risk" designation for one week while the court considers Anthropic's constitutional challenge.

What Led to This Legal Showdown Between Anthropic and the Pentagon?

The conflict began when Anthropic and the US Department of Defense (DOD) ended their partnership over fundamental disagreements about AI safety. Anthropic had been cooperating with the Pentagon on artificial intelligence initiatives, but negotiations broke down when the DOD demanded unrestricted use of Anthropic's models for any lawful application. Anthropic refused, citing its ethical guidelines, which prohibit using its AI for large-scale surveillance of Americans or for developing fully autonomous weapons systems.

Rather than simply terminating the contract, the DOD took an unprecedented step: it designated Anthropic a "supply chain risk," a classification previously reserved for foreign adversaries such as China's Huawei and Russia's Kaspersky. The designation effectively barred government agencies from using Anthropic's products and services. The move sparked immediate backlash, with critics arguing that the government was weaponizing national security classifications against a domestic company over a policy disagreement.

Why Did the Judge Rule in Anthropic's Favor?

Federal Judge Rita Lynn, who issued the preliminary injunction, delivered a sharp rebuke to the government's legal position, stating that the designation appeared to violate constitutional protections for free speech and due process. Judge Lynn emphasized that there was no legitimate legal basis for treating a domestic company as a potential saboteur simply because it challenged government policy.

During the hearing, Judge Lynn pressed government lawyers on their reasoning, asking why they chose to designate Anthropic a supply chain risk when they could have simply ended the contract. The government offered no satisfactory answer. Judge Lynn concluded that the government had produced no credible evidence that Anthropic posed a sabotage threat, and that the designation appeared "arbitrary and capricious."

How Are Other Tech Giants Responding to This Dispute?

The situation has created unusual alliances in Silicon Valley. After the DOD designated Anthropic a supply chain risk, the Pentagon contracted with OpenAI instead, a move that triggered a "Cancel ChatGPT" movement among users and employees who opposed the decision.

OpenAI CEO Sam Altman, however, took a surprising stance, reportedly telling employees that OpenAI would work to help Anthropic escape the designation.

"We believe we can offer Anthropic a way to get off the supply chain risk designation. In my view, it's strange that we're going to such lengths to save a competitor who has tried to destroy OpenAI for years. The situation is complex, but I promise we will act on principles, not appearances," Altman stated.

Meanwhile, Google and Amazon pledged cooperation with the DOD in non-defense areas following the Anthropic designation, attempting to balance their government relationships against broader industry concerns.

What Are the Key Issues at Stake in This Case?
This legal battle raises several critical questions about government power, AI safety, and corporate accountability:

- Constitutional Rights: Whether the government can use national security classifications to punish companies for refusing to comply with policy demands, potentially violating free speech and due process protections.
- AI Safety Standards: Whether companies may refuse government contracts based on ethical concerns about how their AI systems might be used, particularly regarding surveillance and autonomous weapons.
- Precedent for Domestic Companies: Whether supply chain risk designations, historically applied only to foreign adversaries, can be used against American companies without clear evidence of an actual security threat.

Anthropic issued a statement expressing gratitude for the court's swift action and confidence in its legal position. The company emphasized that while the lawsuit was necessary to protect itself, its customers, and its partners, its primary focus remains working constructively with the government to ensure all Americans benefit from safe and reliable AI.

The DOD, for its part, has defended the designation, arguing that the disagreement extended beyond policy differences. Pentagon officials cited frustration with Anthropic's internal approval processes, which required government agencies to seek Anthropic's permission before making exceptions for emergency use cases. The DOD also disputed characterizations that it sought large-scale surveillance, arguing that Anthropic's public statements misrepresented the Pentagon's actual intentions.

What Happens Next in This Case?

The preliminary injunction pauses enforcement for one week while the court considers Anthropic's full legal challenge to the supply chain risk designation. If the court rules for Anthropic on the merits, the designation could be permanently overturned. If the government prevails, the designation would stand, effectively excluding Anthropic from federal contracts. Either way, the outcome will likely set an important precedent for how far the government can go in using national security tools in disputes with domestic technology companies.