A federal judge in San Francisco has blocked the Trump administration's effort to blacklist Anthropic, the AI company behind Claude, ruling that the government likely violated the company's free speech rights. Judge Rita Lin issued a preliminary injunction on Thursday, preventing the Defense Department from enforcing its designation of Anthropic as a "supply chain risk" while the lawsuit proceeds.

What Triggered This Legal Battle?

The conflict began in late February when Defense Secretary Pete Hegseth declared Anthropic a supply chain risk, a designation historically reserved for foreign adversaries. This label requires defense contractors, including Amazon, Microsoft, and Palantir, to certify that they do not use Claude in their military work. Anthropic is the first American company to publicly receive this designation.

The blacklist followed failed contract negotiations between Anthropic and the Pentagon over how Claude would be deployed on the DOD's GenAI.mil platform. The two sides disagreed on critical terms: the Pentagon wanted unfettered access to Claude for all lawful purposes, while Anthropic sought assurances that its technology would not be used for fully autonomous weapons or domestic mass surveillance.

President Donald Trump amplified the pressure with a Truth Social post ordering federal agencies to "immediately cease" all use of Anthropic's technology, with a six-month phase-out period. Trump criticized Anthropic as an "out-of-control, Radical Left AI company" run by people with no understanding of the real world.

Why Did the Judge Rule in Anthropic's Favor?

Judge Lin's decision centered on free speech protections and the limits of the government's authority to blacklist companies. During Tuesday's hearing, she pressed government lawyers on the rationale for the blacklist, and her written order was sharply critical of the administration's actions.

Lin wrote that "nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government." She also noted that "punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation."

The judge acknowledged that the Pentagon has the right to stop using Claude and seek alternative AI vendors. However, she distinguished between that legitimate choice and the government's method of enforcement. "I see the question in this case as being a very different one, which is whether the government violated the law," Lin stated.

How to Understand the Legal Road Ahead

- Two Separate Lawsuits: The Trump administration relied on two distinct federal statutes to justify the blacklist, requiring Anthropic to challenge the designation in two separate courts. One case is in federal court in San Francisco; the other is in the U.S. Court of Appeals in Washington.
- Preliminary vs. Final Ruling: The injunction is preliminary, meaning it pauses the government's actions while the case proceeds. A final ruling could take months, but the judge's language suggests Anthropic is likely to succeed on the merits.
- Broader Implications: The ruling raises questions about how the U.S. government can regulate AI companies and whether national security concerns justify blacklisting American firms without due process.

What Does This Mean for Anthropic and the AI Industry?
Anthropic issued a statement expressing gratitude for the court's swift action: "We're grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI."

The preliminary injunction is a significant victory for Anthropic, which had signed a $200 million contract with the Pentagon in July before the relationship deteriorated. The company was the first to deploy its models across the DOD's classified networks and had been praised for its ability to integrate with existing defense contractors like Palantir.

The case highlights a fundamental tension in AI governance: how to balance national security concerns with the rights of American companies to operate freely and challenge government decisions. As AI becomes increasingly central to military and civilian infrastructure, this dispute may set a precedent for how future conflicts between tech companies and the government are resolved.