Anthropic vs Pentagon: Big Tech's Reversal on AI and Military Ties

The standoff between Anthropic and the Pentagon has forced the technology industry to confront a critical question: how should artificial intelligence products be used in warfare, and what ethical boundaries must be established? This clash underscores a dramatic shift in Silicon Valley's stance, which has moved from staunch opposition to military collaboration less than a decade ago to embracing lucrative defense contracts today, particularly under Donald Trump's administration.

Anthropic's Legal Battle with the Department of Defense

Anthropic's conflict with the Trump administration escalated significantly three days ago when the AI firm filed a lawsuit against the Department of Defense. The company alleges that the government's decision to blacklist it from federal work violates its First Amendment rights. This legal action follows months of tension, during which Anthropic sought to impose restrictions on its AI model, prohibiting its use for domestic mass surveillance or fully autonomous lethal weapons.

Anthropic argues that acquiescing to the Pentagon's demand for "any lawful use" of its technology would breach its foundational safety principles and potentially lead to abuse. This stance sets an ethical precedent that other industry players must now consider. The refusal to remove safety guardrails and the subsequent retaliation from the Pentagon have highlighted longstanding concerns about AI's role in conflict, demonstrating how the industry's goals have evolved.

Margaret Mitchell, an AI researcher and chief ethics scientist at Hugging Face, commented on the complexity of the situation, noting that simplistic moral judgments are inadequate in this context.

From Anti-Military Protests to Defense Contracts

Several factors contribute to big tech's newfound embrace of militarism. Alignment with the Trump administration, including public displays of loyalty from major CEOs, has tied technology firms to the government's ambition to expand military capabilities. The administration's pledge to overhaul federal agencies using artificial intelligence presents a significant opportunity for AI companies to integrate their products into government and military operations, securing long-term revenue.

Additionally, concerns over China's technological advancements and a global surge in defense spending have shifted industry attitudes. However, this represents a stark reversal from recent history. In 2018, thousands of Google employees protested against Project Maven, a program analyzing drone footage for the DoD, with over 3,000 workers signing an open letter stating, "We believe that Google should not be in the business of war." Google subsequently decided not to renew the project and implemented policies barring technology that could cause injury.

Since then, Google has suppressed employee activism, removed the 2018 language prohibiting weaponry technology, and signed numerous military contracts. In 2024, the company fired over 50 employees protesting its ties to the Israeli government, with CEO Sundar Pichai emphasizing that Google is a business, not a platform for political debates. Recently, Google announced it would provide its Gemini AI to the military for creating AI agents on unclassified projects.

OpenAI also lifted its blanket ban on military access in 2024, with its chief product officer now serving in the U.S. military's innovation corps. Alongside Google, Anthropic, and xAI, OpenAI signed a contract worth up to $200 million with the DoD last year to integrate technology into military systems. On the same day the defense secretary labeled Anthropic a supply chain risk, OpenAI secured a deal for its tech to be used in classified military systems.

Other companies, such as defense tech firm Anduril and surveillance tech maker Palantir, have made military partnerships central to their business models, influencing Silicon Valley's political leanings. Palantir, which took over the Project Maven contract after Google dropped it, has been a pioneer in military collaboration, with CEO Alex Karp advocating for closer integration between the tech industry and the U.S. military.

Anthropic's Nuanced Stance on AI and Warfare

Despite public praise for its standoff with the Pentagon, Anthropic's co-founder and CEO, Dario Amodei, has emphasized common ground with the government. In a blog post last Thursday, he wrote, "Anthropic has much more in common with the Department of War than we have differences." While the White House has criticized Anthropic as "a radical left, woke company," Amodei's views are not pacifistic.

In a January essay, Amodei warned of AI's potential harms, such as creating deadly bioweapons and threats from China, while arguing that democratic governments should be armed with advanced AI to combat autocratic adversaries. He expressed less concern about AI facilitating warfare and more about technology reliability and the risk of consolidation among a few individuals controlling autonomous drone armies.

Amodei's essay also addressed issues central to the Pentagon feud, including AI's potential for mass surveillance. He advocated for safeguards against abuse, stating that AI should be used for national defense "in all ways except those which would make us more like our autocratic adversaries."

Although Amodei has maintained the company's red lines, he has repeatedly expressed a desire for Anthropic to continue working with the Defense Department. The lawsuit reveals the extent of this collaboration, noting that Anthropic's Claude Gov model is less restrictive for military use, allowing applications like handling classified documents, military operations, and threat analysis.

The government has reportedly used Claude for target selection and analysis in bombing campaigns against Iran, a use case Anthropic has not opposed. In his blog post, Amodei stated that Anthropic does not involve itself in military operational decisions but supports American warfighters and remains committed to providing technology. He told CBS News last week, "We have said to the department of war that we are OK with all use cases... basically 98 or 99% of the use cases they want to do, except for two."

This ongoing battle illustrates how big tech's relationship with the military has transformed, moving from ethical protests to complex negotiations over AI's role in modern warfare.