Pentagon Cancels Anthropic AI Contract Amid National Security Dispute

A major conflict over the military's use of artificial intelligence has erupted into public view, leading the Pentagon to cancel its contract with AI company Anthropic. Defense Secretary Pete Hegseth abruptly ended the Department of Defense's collaboration with the firm, invoking a law designed to counter foreign supply chain threats in order to label the U.S.-based company a risk.

National Security Accusations and Legal Challenges

President Donald Trump and Hegseth have accused Anthropic of endangering national security after CEO Dario Amodei refused to back down from concerns that the company's products could be used for mass surveillance or autonomous armed drones. Anthropic, based in San Francisco, has vowed to sue over the supply chain risk designation, calling it an unprecedented and legally unsound action against an American company.

The looming legal battle could significantly impact the balance of power in Big Tech during a critical period, influencing rules for military AI use and safety guardrails to prevent threats to human life. This dispute has already benefited OpenAI, which seized the opportunity to offer its technology to the Pentagon after Anthropic objected to certain terms from the Trump administration.

Implications of the Supply Chain Risk Designation

The Pentagon's move to designate Anthropic as a supply chain risk will terminate its contract worth up to $200 million and prohibit other defense contractors from doing business with the AI firm. Trump mandated that most government agencies stop using Anthropic's AI immediately, but granted the Pentagon a six-month grace period to phase out embedded technology.

Anthropic argues that Hegseth lacks the legal authority to disrupt its relationships with other defense contractors, noting that commercial contracts for non-defense projects can continue. The supply chain risk list typically includes firms with ties to adversaries like Huawei or Kaspersky, but applying it to Anthropic serves as a warning to AI and defense companies about compliance with government demands.

Business Impact and Industry Repercussions

Anthropic has not yet been formally notified of the designation but plans to challenge it in court. The company has clarified that the risk designation affects only the use of its Claude AI chatbot for Department of Defense work, not for other purposes. This distinction is crucial, as Anthropic projects $14 billion in revenue this year, with over 500 customers paying at least $1 million annually for Claude.

Claude's success has positioned it as a viable replacement for business software from companies like Salesforce and Workday, causing stock declines in that sector. However, the risk designation may create uncertainty among customers using Claude for non-military work, potentially slowing AI advancement in the U.S. as the country competes with China.

OpenAI Steps In Amid Rivalry

In the wake of the move against Anthropic, OpenAI CEO Sam Altman announced a deal to supply AI to the Pentagon's classified military networks while adhering to restrictions on mass surveillance and autonomous weapons. It remains unclear why the Pentagon accepted OpenAI's terms but not Anthropic's. OpenAI's agreement coincided with a $110 billion funding round valuing the company at $730 billion, but it risks backlash if perceived as prioritizing profit over safety.

The rift may also open opportunities for other tech figures like Elon Musk, who oversees the Grok AI chatbot and has supported the Trump administration in this dispute. Google, with its Gemini AI tools, could seek more military business, though internal pressures urge adherence to ethical standards.

Anthropic and Amodei continue to advocate for stronger AI guardrails, emphasizing their commitment to democratic values as they prepare to fight the designation in court.