US Military Reportedly Used Claude AI in Iran Strikes Despite Trump's Ban
The US military reportedly used Anthropic's Claude artificial intelligence model to support its massive joint bombardment of Iran alongside Israel, despite former President Donald Trump's decision, announced mere hours earlier, to sever all federal ties with the company and its AI tools. The revelation, first reported by the Wall Street Journal and Axios, underscores the profound difficulty of disentangling powerful AI technologies from military operations once they are deeply embedded.
Intelligence and Targeting Applications
According to the reports, US military command employed Claude for critical intelligence gathering, assistance in selecting precise targets, and advanced battlefield simulations during the offensive that commenced on Saturday. This occurred despite Trump's Friday order, issued just before the attack began, which mandated that all federal agencies cease using Claude immediately. On his Truth Social platform, Trump vehemently denounced Anthropic as a "Radical Left AI company run by people who have no idea what the real World is all about".
Escalating Tensions Over Military Use
The current conflict traces its origins to January, when the US military used Claude during a controversial raid aimed at capturing Venezuelan President Nicolás Maduro. Anthropic publicly objected, citing its strict terms of service that prohibit applications for violent ends, weapons development, or surveillance purposes. Since that incident, relations between the Trump administration, the Pentagon, and the AI firm have deteriorated significantly.
In a lengthy social media post on Friday, Defense Secretary Pete Hegseth accused Anthropic of "arrogance and betrayal", asserting that "America's warfighters will never be held hostage by the ideological whims of Big Tech". Hegseth demanded unrestricted access to all of Anthropic's AI models for any lawful purpose, yet simultaneously acknowledged the practical challenges of an immediate separation.
Transition Period and Rival Involvement
Secretary Hegseth noted that Anthropic would continue providing services for up to six months to facilitate a "seamless transition to a better and more patriotic service". The concession highlights the logistical difficulty of rapidly extracting AI tools from complex military systems. In the wake of the rupture with Anthropic, rival company OpenAI has moved to fill the void: CEO Sam Altman confirmed an agreement with the Pentagon for the use of OpenAI's tools, including ChatGPT, within its classified networks.
The situation vividly illustrates the ongoing tension between advancing military technology, corporate ethical policies, and political directives in modern warfare.
