Federal Judge Blocks Pentagon from Labelling Anthropic as Supply Chain Risk

A federal judge has issued a temporary ruling in favour of artificial intelligence company Anthropic, blocking the Pentagon from labelling the firm as a supply chain risk. The decision, announced by U.S. District Judge Rita Lin on Thursday, also halts a directive from President Donald Trump that ordered all federal agencies to immediately cease using Anthropic's services.

Court Proceedings and Judicial Scrutiny

The ruling follows a ninety-minute hearing held in San Francisco federal court on Tuesday. During the proceedings, Judge Lin sharply questioned the Trump administration's extraordinary step of designating Anthropic a supply chain risk. The action reportedly came after defence contract negotiations deteriorated, primarily over Anthropic's insistence that its AI technology not be deployed in fully autonomous weapons systems or used for the surveillance of American citizens.

Anthropic, best known for developing the Claude chatbot, had sought an emergency court order to remove what it described as an unjustified and damaging stigma. The San Francisco-based company alleges the designation was part of an "unlawful campaign of retaliation", and it filed a lawsuit against the Trump administration earlier this month.


Legal Arguments and Constitutional Claims

The Pentagon had argued that it should retain the authority to use Claude in any manner it deems lawful under existing statutes. Anthropic countered with two constitutional claims: that the designation retaliated against its publicly stated AI safety views, violating its First Amendment free speech rights, and that it breached the Fifth Amendment's due process guarantee because the company was denied any meaningful opportunity to contest the supply chain risk label before it was imposed.

Judge Lin clarified that her ruling specifically addressed the government's procedural actions rather than the underlying public policy debate about AI ethics and military applications. "If the genuine concern is maintaining the integrity of the operational chain of command, the Department of War could simply choose to stop using Claude. Instead, these comprehensive measures appear deliberately designed to punish Anthropic," Lin wrote in her detailed judicial opinion.

Scope of the Ruling and Ongoing Litigation

Importantly, Judge Lin stayed her temporary order for one week to allow for administrative adjustments. The ruling does not require the Pentagon to continue using Anthropic's products, nor does it prevent the Department of Defense from transitioning to alternative AI providers through proper channels.

This case represents just one facet of the ongoing legal confrontation. Anthropic has also initiated a separate, more narrowly focused case that remains pending before the federal appeals court in Washington, D.C. The broader conflict highlights escalating tensions between technology companies advocating for ethical AI constraints and government agencies seeking unrestricted access to advanced artificial intelligence capabilities for national security purposes.
