OpenAI CEO Concedes Lack of Authority Over Pentagon's AI Applications
Sam Altman, the chief executive of OpenAI, has acknowledged that his company has no control over how the US Pentagon uses its artificial intelligence products in military operations. He made the admission during a session at the AI Impact summit at Bharat Mandapam in New Delhi, India, on 19 February 2026, and elaborated on it in internal communications to employees on Tuesday. His comments come amid heightened scrutiny of the ethical implications of deploying AI in warfare and growing unease among AI industry workers about the potential misuse of their technologies.
Escalating Tensions in AI-Military Collaborations
According to reports from Bloomberg and CNBC, Altman explicitly informed OpenAI staff that they "do not get to make operational decisions" concerning military actions. He illustrated this point by referencing contentious scenarios, such as the Iran strike and the Venezuela invasion, emphasising that employees have no input on such matters. This revelation coincides with intense negotiations in the AI sector, as the Pentagon has been pressuring companies to remove safety protocols from their models to facilitate broader military applications. Notably, AI-enabled systems have reportedly been employed in operations targeting Venezuelan leader Nicolás Maduro and in decision-making processes during conflicts with Iran.
Anthropic's Ethical Stand and OpenAI's Controversial Deal
In a contrasting move, Anthropic, OpenAI's rival and the maker of the Claude chatbot, recently declined a partnership with the Pentagon over ethical concerns, including fears that its technology could be used for domestic mass surveillance or autonomous weaponry. The refusal prompted US Defense Secretary Pete Hegseth to label Anthropic a "supply-chain risk", a designation without precedent for a US company and one that could carry severe financial consequences if formally implemented. The same day, the Pentagon announced a new agreement with OpenAI, apparently intended to replace Anthropic's technology in military contexts. The timing of the deal, seen by critics as OpenAI crossing ethical lines that Anthropic had upheld, has drawn public criticism and internal dissent among OpenAI employees.
Damage Control and Industry Backlash
In response to the backlash, Altman and OpenAI have sought to reassure stakeholders by asserting that their technology will be used lawfully, while Altman conceded that the deal was hastily arranged, making the company appear "opportunistic and sloppy". Meanwhile, Anthropic's CEO, Dario Amodei, delivered a scathing critique in a memo to his own employees, accusing Altman of being "mendacious" and of offering "dictator-style praise to Trump". Amodei further contended that the Pentagon's and the Trump administration's disapproval of Anthropic stemmed from its lack of political donations, contrasting this with OpenAI president Greg Brockman's substantial contributions to a pro-Trump political action committee.
Broader Implications for AI Ethics and Regulation
The ongoing disputes highlight deepening ethical rifts within the AI industry, as companies grapple with balancing innovation against moral responsibilities in military collaborations. With the Pentagon's increasing reliance on AI for strategic operations, the debate over corporate accountability and regulatory oversight is intensifying, raising critical questions about the future governance of artificial intelligence in defence and security contexts.