US Military Used Anthropic's AI Model Claude in Venezuela Raid, Report Reveals

The US military used Anthropic's artificial intelligence model, Claude, during a high-stakes operation in Venezuela, according to a report by the Wall Street Journal published on Saturday. The incident is a significant instance of the US Department of Defense incorporating AI into classified missions, and it raises questions about compliance with ethical guidelines and usage policies.

Operation Details and AI Deployment

The raid, which aimed to seize Venezuelan leader Nicolás Maduro, involved extensive bombing across Caracas and resulted in the deaths of 83 people, according to Venezuela's defence ministry. Claude, developed by Anthropic, is an AI model with capabilities ranging from document processing to piloting autonomous drones, though exactly how it was deployed in this operation remains unclear.

A spokesperson for Anthropic declined to confirm whether Claude was used in the Venezuela raid but emphasised that any application of the tool must adhere strictly to the company's policies. These policies explicitly prohibit using Claude for violent purposes, weapon development, or surveillance activities.

Partnership with Palantir Technologies

The Wall Street Journal cited anonymous sources indicating that Claude was accessed through Anthropic's partnership with Palantir Technologies, a contractor that works closely with the US defence department and federal law enforcement agencies. Palantir declined to comment on these allegations, leaving the exact nature of the collaboration uncertain.

This revelation underscores the deepening integration of AI in military arsenals globally. For instance, Israel's military has employed drones with autonomous features in Gaza and leveraged AI extensively for targeting databases. Similarly, the US military has utilised AI targeting systems in recent strikes across Iraq and Syria.

Ethical Concerns and Regulatory Calls

Critics have voiced strong warnings against the use of AI in weapons technologies, particularly highlighting risks associated with autonomous weapons systems. They point to the danger of targeting errors, in which an algorithm wrongly identifies a person as a target, leading to unintended casualties and ethical breaches.

AI companies, including Anthropic, are grappling with how to engage responsibly with the defence sector. Anthropic's CEO, Dario Amodei, has called for robust regulation to mitigate harms from AI deployment. He has expressed particular concern over the use of AI in autonomous lethal operations and in surveillance within the US, urging a cautious approach.

Defence Department's Stance and Future Collaborations

This cautious stance appears to have caused friction with the US defence department. In January, Secretary of War Pete Hegseth stated that the department would not employ AI models that hinder warfighting capabilities, suggesting a preference for less restrictive technologies.

In a move signalling ongoing AI adoption, the Pentagon announced in January that it would collaborate with xAI, a company owned by Elon Musk. Additionally, the defence department uses customised versions of Google's Gemini and OpenAI's models to support various research initiatives, indicating a broader trend towards AI integration in military operations.

The use of Claude in the Venezuela raid highlights the complex interplay between technological advancement and ethical boundaries in modern warfare. As AI continues to evolve, debates over its regulation and application in defence contexts are likely to intensify, with significant implications for global security and policy frameworks.