The Iran School Bombing and the Danger of Blaming AI
A memorial for the victims of the Shajareh Tayyebeh primary school bombing in Tehran stands as a sombre reminder of the tragedy. In the aftermath, discussions have emerged about the role of artificial intelligence in modern warfare, with some attributing blame to AI systems. However, this perspective risks masking a more profound issue: the erosion of human accountability in decision-making.
The Language of Displacement: From Euphemisms to Automation
In a letter to the editor, Anthony Lawton from Market Harborough, Leicestershire, highlights that blaming AI for the Iran school bombing represents a worrying trend. He argues that using terms like "AI error" subtly removes human subjects from sentences, much like past euphemisms such as "dehoused" or "collateral damage" obscured responsibility. This linguistic shift displaces accountability from people to systems, complicating moral scrutiny.
Lawton emphasises that, however complex the analysis and chains of command, it is ultimately human beings who design, authorise, and execute decisions involving technology. Obscuring this fact is not merely a technical oversight but a civic failure. AI may accelerate warfare, but it also accelerates a shift in which automation serves as an alibi, hindering the public's ability to hold individuals accountable.
Anthropomorphic Language and Moral Agency in AI
Dr Felicity Mellor, Director of the Science Communication Unit at Imperial College London, echoes these concerns in another letter. She criticises the anthropomorphic language used to describe AI behaviours, such as agents "conniving," "lying," or "cheating." Terms like "scheming" ascribe moral agency to large language models, which can obscure where true responsibility lies.
Mellor draws an analogy: if a company released high-speed vehicles without effective brakes, we would blame the humans for recklessness, not the vehicles for "conniving". Similarly, if out-of-control AI causes harm, attributing moral agency to the technology rather than to its creators, namely the tech companies and the governments promoting them, undermines accountability. Clear language is essential to ensure proper scrutiny and justice.
The Broader Implications for Public Scrutiny and Ethics
Both letters underscore that public language must accurately name human responsibility if public scrutiny is to be effective. As AI integrates into military and civilian spheres, maintaining clarity about who acts is crucial for moral accountability. The issue extends beyond the Iran bombing to global conflicts and technological advances more broadly, wherever ethical oversight is at stake.
In summary, while AI presents challenges in warfare and beyond, the core problem remains human-driven. By focusing on language and accountability, society can better address the ethical dimensions of technology, ensuring that progress does not come at the cost of moral clarity.