Anthropic's Pentagon Feud Intensifies Over AI Warfare Ethics and Accountability
This week brought escalating chaos in the high-stakes feud between the Pentagon and Anthropic, an artificial intelligence firm once considered a quieter player in the AI boom. The conflict centers on Anthropic's refusal to allow its Claude chatbot to be used for domestic mass surveillance or for autonomous weapons systems capable of killing without human input. Amid tense negotiations, the company rejected a Pentagon deadline for a deal last week, prompting Defense Secretary Pete Hegseth to accuse Anthropic of "arrogance and betrayal" of the United States. Hegseth demanded that any company collaborating with the U.S. government cease all business with Anthropic, a dramatic escalation of the dispute.
Fallout and Financial Consequences
The fallout has been swift and severe. OpenAI announced its own deal with the Department of Defense, drawing pushback from its employees and prompting Anthropic CEO Dario Amodei to accuse rival Sam Altman of offering "dictator-style praise" to Donald Trump, a remark for which Amodei later apologized. Trump himself denounced Anthropic in an interview with Politico, saying he "fired them like dogs." On Thursday, the DoD formally declared Anthropic a supply-chain risk and instructed other businesses to cut ties, the first time an American company has been targeted with the designation. If fully enforced, the move carries grave financial consequences for Anthropic, potentially cutting it off from key government and defense contractor partnerships.
The feud has intensified an unsettled debate over how AI will be deployed in warfare and who will be held accountable for the outcomes. It also represents one of the most dramatic disagreements to date between the tech industry and the Trump administration. As the military rapidly adopts AI technology for operations, including in conflicts with Iran, previously hypothetical ethical dilemmas have become real-world tests for AI companies like Anthropic.
Inherent Contradictions and Red Lines
Anthropic's standoff with the Pentagon highlights contradictions at the heart of the company. Founded on the premise of creating a safe future for AI, Anthropic has nevertheless struck major partnerships for classified work with the Pentagon and with the surveillance tech giant Palantir. Its leadership voices deep concern about existential AI risks, yet the company recently dropped a founding safety pledge, citing the pace of industry competition. It pledges transparency but has built its models on a rapacious appetite for proprietary data, with court records documenting a secretive effort to scan, and then destroy, millions of physical books to train Claude.
Despite these contradictions, recent weeks have revealed red lines that Anthropic appears unwilling to cross, a rarity in a tech industry largely subservient to the Trump administration and fearful of falling behind rivals. So far, its resistance has amounted to a public relations victory: Claude surged in popularity after the deal collapsed, while OpenAI faced reputational damage. The longer-term picture is murkier, with some defense contractors and U.S. government departments stepping away from Anthropic's models and the Trump administration intent on punishing the company for its dissent. Anthropic plans to challenge the supply-chain risk designation in court, while Amodei has reportedly reopened negotiations with the DoD in recent days.
The 'Safety-First' AI Company and Its Origins
Before clashing with Sam Altman and the Pentagon, Dario Amodei was a leading researcher at OpenAI, joining in 2016 after a stint at Google and eventually becoming vice-president of research; his sister Daniela served as vice-president of safety and policy. In 2021, before ChatGPT's release, the Amodeis broke away to found Anthropic, branding it as an "AI safety and research company" dedicated to building safer AI systems guided by a written constitution of principles.
In 2024, Amodei published an essay titled "Machines of Loving Grace," outlining a utopian vision in which AI could eliminate most cancers, prevent infectious diseases, and reduce economic inequality, though he expressed skepticism about AI advancing democracy and peace. Amodei, who holds a doctorate in biophysics from Princeton, has long worried about existential AI risks, drawing parallels to nuclear weapons. That focus aligns with elements of the "effective altruism" movement, which advocates for projects that maximize global good, though the Amodeis deny being adherents. Critics, including Cornell University researcher Sarah Kreps, argue that existential risk concerns distract from more tangible AI harms and biases.
From Safety-First to Targeted Military Applications
While Anthropic vowed to build safer AI, it also pursued enterprise customers, positioning Claude as back-end infrastructure for large organizations, including military systems. The integration began with a 2024 deal with Palantir that allowed Claude to be used in classified environments, followed by a $200m DoD deal in 2025. Neither agreement locked in permanent terms on safety guardrails, setting up the current dispute when the government requested looser restrictions for broader use.
The standoff has taken on a political dimension because of Anthropic's hiring of former Biden staffers, Amodei's opposition to Trump, and Hegseth's drive to eradicate "wokeness" from the military. The urgency is heightened by the U.S. military's use of Claude in live operations, including the mission to capture Venezuelan leader Nicolás Maduro and the war with Iran. Reports indicate Claude is embedded in Palantir's Maven Smart System, where it helps determine bombing sites in Iran and analyze strikes.
Dual-Use Technologies and Regulatory Gaps
The feud exemplifies the problems surrounding dual-use technologies, products with both civilian and military applications. AI developed for broad consumer use and then adapted for classified systems exposes fault lines: a company may object on ethical grounds to how its product is repurposed yet have limited control once it is deployed. Experts note that tech companies lack full insight into how their technology is used inside classified systems, while the military does not fully understand the inner workings of proprietary AI models, a "double black box" problem highlighted by law professor Ashley Deeks.
Contracts can be fuzzy, especially given the Trump administration's distaste for legal oversight. Deeks explains, "There is an expectation, generally, that parties to a contract are supposed to comply with the contract. But, of course, contracts need to be interpreted and the military might interpret a phrase one way where the company intended it to mean something else." Hanging over the feud is the broader question of who should decide how AI is used in war, given the absence of detailed congressional regulation of autonomous weapons systems. For now, Anthropic functions as one of the few checks on the military's expansive ambitions for weaponizing AI, raising pressing public questions about risk, accountability, and ethical boundaries in warfare.



