Anthropic Sues US Defense Department Over AI Blacklisting Dispute

Anthropic, a leading artificial intelligence company, has initiated two lawsuits against the United States Department of Defense. The legal action challenges the Pentagon's recent decision to label the firm a supply chain risk, a move Anthropic claims is unlawful and infringes on its constitutional rights.

Background of the Feud

The conflict stems from a prolonged disagreement between Anthropic and the Defense Department regarding the implementation of safeguards. These safeguards are designed to prevent the military's potential misuse of Anthropic's AI models, such as for mass domestic surveillance or the deployment of fully autonomous lethal weapons. The Pentagon formally issued the supply chain risk designation last Thursday, marking the first instance this blacklisting tool has been applied to a US-based company.

Legal Proceedings and Allegations

Anthropic filed the lawsuits in two federal courts: the US District Court for the Northern District of California and the US Court of Appeals for the DC Circuit. The company argues that the Trump administration is penalizing it for refusing to comply with governmental ideological demands, which it asserts violates protected speech under the First Amendment. In its California lawsuit, Anthropic stated, "These actions are unprecedented and unlawful. The constitution does not allow the government to wield its enormous power to punish a company for its protected speech."


The designation requires any company conducting business with the government to sever all ties with Anthropic, posing a significant threat to its business operations. Despite this, Anthropic has emphasized its ongoing commitment to providing AI for national security purposes. The firm noted previous collaborations with the Department of Defense to tailor its systems for specific use cases and expressed a desire to continue negotiations.

Impact on Business and National Security

Anthropic's AI model, Claude, has been extensively integrated into Department of Defense operations over the past year. Until recently, it was the only AI model approved for use in classified systems, reportedly informing military decisions, including the targeting of missile strikes during the war against Iran. The company alleges that the punitive actions by the Trump administration and the Pentagon are "harming Anthropic irreparably," a contrast with earlier statements by CEO Dario Amodei, who downplayed the impact in a CBS News interview.

In its suit, Anthropic claimed, "Defendants are seeking to destroy the economic value created by one of the world's fastest-growing private companies, which is a leader in responsibly developing an emergent technology of vital significance to our Nation." A spokesperson for Anthropic added, "Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners."

Government Response and Future Implications

The Department of Defense has not yet responded to requests for comment on the lawsuits. This case highlights broader tensions between technology firms and government agencies over ethical AI use and regulatory oversight. As Anthropic pursues legal avenues, it underscores the challenges in balancing innovation with national security concerns in the rapidly evolving AI landscape.
