Pentagon Labels AI Firm Anthropic as Supply Chain Risk, Effective Immediately

The Trump administration has executed its threat to designate artificial intelligence company Anthropic as a supply chain risk in an unprecedented move that could compel other government contractors to cease using the AI chatbot Claude. This decision marks a significant escalation in tensions between the federal government and the tech industry over national security protocols.

Immediate Designation and National Security Concerns

The Pentagon issued a statement on Thursday confirming it has officially informed Anthropic leadership that the company and its products are deemed a supply chain risk, effective immediately. This action appears to close the door on further negotiations with Anthropic, coming nearly a week after President Donald Trump and Defense Secretary Pete Hegseth accused the company of endangering national security.

Trump and Hegseth announced a series of threatened punishments last Friday, on the eve of the Iran war, after Anthropic CEO Dario Amodei refused to back down over concerns that the company's products could be exploited for mass surveillance of Americans or for autonomous weapons systems. The San Francisco-based company did not immediately respond to a request for comment on Thursday. It had previously vowed to sue if the Pentagon pursued what it described as a legally unsound action never before publicly applied to an American company.

Contractor Reactions and Operational Impacts

Some military contractors have already begun severing ties with Anthropic, a rising star in the tech industry that sells Claude to various businesses and government agencies. Lockheed Martin stated it will follow the President's and the Department of War's direction and seek alternative providers of large language models.

Lockheed Martin emphasized that it expects minimal impact because it is not dependent on any single LLM vendor for any portion of its work. It remains unclear whether the designation is meant to block Anthropic's use by all federal government contractors or only by those partnering with the military.

Criticism and Legal Scrutiny

The Pentagon's decision to apply a rule designed to address supply chain threats posed by foreign adversaries has drawn swift criticism from both opponents and some supporters of Trump's Republican administration. Federal code defines supply chain risk as the risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert a system in order to disrupt, degrade, or spy on it.

U.S. Senator Kirsten Gillibrand, a New York Democrat and member of the Senate Armed Services Committee and Senate Intelligence Committee, condemned the move as a dangerous misuse of a tool meant to address adversary-controlled technology. She described it as reckless, shortsighted, self-destructive, and a gift to adversaries in a written statement on Thursday.

Neil Chilson, a Republican former chief technologist for the Federal Trade Commission who now leads AI policy at the Abundance Institute, characterized the decision as massive overreach that would harm both the U.S. AI sector and the military's ability to acquire the best technology for American warfighters.

Former Officials Express Grave Concerns

Earlier in the day, a group of former defense and national security officials sent a letter to U.S. lawmakers expressing serious concern about the designation. The letter, signed by former officials and policy experts including former CIA director Michael Hayden and retired Air Force, Army, and Navy leaders, argued that using this authority against a domestic American company represents a profound departure from its intended purpose and sets a dangerous precedent.

They emphasized that such a designation is meant to protect the United States from infiltration by foreign adversaries, such as companies beholden to Beijing or Moscow, not from American innovators operating transparently under the rule of law. Applying this tool to penalize a U.S. firm for declining to remove safeguards against mass domestic surveillance and fully autonomous weapons constitutes a category error with consequences extending far beyond this specific dispute.

Market Dynamics and Rivalry Intensification

While losing significant partnerships with defense contractors, Anthropic has seen a surge in consumer downloads over the past week as users rally behind its stance. The company reported more than a million people signing up for Claude each day this week, propelling it past OpenAI's ChatGPT and Google's Gemini to become the top AI app in more than 20 countries on Apple's App Store.

The dispute with the Pentagon has further deepened Anthropic's bitter rivalry with OpenAI, which announced a deal with the Pentagon on Friday to effectively replace Anthropic with ChatGPT in classified environments. OpenAI CEO Sam Altman later acknowledged that he should not have rushed a deal that appeared opportunistic and sloppy.

The Pentagon did not reply to questions in time for publication, leaving many aspects of this unfolding situation unresolved. This designation represents a landmark moment in the intersection of technology, national security, and government regulation, with potential ramifications for the entire AI industry and federal contracting practices.