OpenAI Revises Pentagon AI Deal Amid Surveillance Fears and Backlash
OpenAI is amending its hastily arranged contract to supply artificial intelligence to the US Department of War, after CEO Sam Altman admitted the initial deal appeared "opportunistic and sloppy." The ChatGPT maker has now explicitly barred its technology from being used for domestic mass surveillance or by defence intelligence agencies such as the National Security Agency.
Deal Prompted by Anthropic's Exit and Ethical Concerns
The contract was struck almost immediately after the Pentagon dropped its existing AI contractor, Anthropic, which had insisted that using AI for mass domestic surveillance was incompatible with democratic values. That stance led US President Donald Trump to brand Anthropic "leftwing nut jobs" and direct federal agencies to cease using its technology. Despite OpenAI's denials that the agreement permitted surveillance, commentators raised concerns reminiscent of the 2013 Snowden scandal, in which the NSA was found to be harvesting communications data.
The backlash was swift: users on platforms including X and Reddit launched a "delete ChatGPT" campaign, accusing OpenAI of training a "war machine." Concurrently, Claude, the chatbot developed by Anthropic, surged to the top of Apple's App Store charts, surpassing ChatGPT in popularity, according to analysis by Sensor Tower.
Altman Admits Rushed Process and Complex Issues
In a message to employees reposted on X, Sam Altman acknowledged that the deal announced on Friday was rushed. He stated, "We shouldn't have rushed to get this out on Friday. The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy." OpenAI had initially claimed the contract included "more guardrails than any previous agreement for classified AI deployments, including Anthropic's."
Employee Protests and Ethical Red Lines
The use of AI by the US military has alarmed nearly 900 employees at OpenAI and Google, who signed an open letter urging their leaders to refuse the Department of War's requests for surveillance and autonomous killing capabilities. They warned that the government was attempting to "divide each company with fear that the other will give in," and called for unity in rejecting such uses. In a blogpost, OpenAI stated that one of its red lines is "no use of OpenAI technology to direct autonomous weapons systems."
Questions Over Ethical Compromises and Government Relations
Observers, including OpenAI's former head of policy research, Miles Brundage, have questioned how OpenAI secured a deal that resolves ethical concerns Anthropic deemed insurmountable. Posting on X, Brundage suggested that OpenAI may have "caved" and framed the outcome as a non-concession, potentially undermining Anthropic. He emphasised the complexity of the organisation, expressed distrust of some of its dealings with government and politics, and added that he would "rather go to jail" than follow an unconstitutional order.
Meanwhile, three more US cabinet-level agencies, the Departments of State, Treasury, and Health and Human Services, have moved to stop using Anthropic's AI products after the Department of War declared the company a supply chain risk. Following Secretary of Defence Pete Hegseth's decision, President Trump ordered all US government agencies to phase out their use of Anthropic.
