Human Error, Not AI, Caused US Bombing of Iranian School in Operation Epic Fury

On the morning of 28 February 2026, American forces launched Operation Epic Fury, striking the Shajareh Tayyebeh primary school in Minab, southern Iran, at least twice during the morning session. The attack killed between 175 and 180 people, predominantly girls aged seven to 12. In the aftermath, media coverage fixated on whether Claude, an Anthropic chatbot, had selected the school as a target. That narrative obscured the real cause: human bureaucratic failures that had accumulated over many years.

The Misplaced Focus on AI

Within days, Congress questioned US Secretary of Defense Pete Hegseth about the use of AI in the strikes, while publications like The New Yorker speculated about whether Claude could be trusted in combat. These discussions had little basis in reality. Targeting for Operation Epic Fury relied on Maven, a system contested in Silicon Valley from its earliest days. In 2018, over 4,000 Google employees protested the company's Pentagon contract for AI targeting, and Google abandoned it. Palantir Technologies, co-founded by Peter Thiel, then took over, spending six years developing Maven into a targeting infrastructure that fuses satellite imagery, signals intelligence, and sensor data.

The school had been misclassified as a military facility in a Defense Intelligence Agency database. According to CNN, the entry had never been updated to reflect the site's conversion from an adjacent Islamic Revolutionary Guard Corps compound into a school, a change completed by 2016 at the latest. A chatbot did not cause this tragedy; people failed to update a database, and other people built a system that made such failures lethal. By the start of the Iran war, Maven was embedded in military infrastructure, yet public debate centred on Claude, illustrating what scholar Morgan Ames terms "charismatic technology": a technology so compelling, in this case large language models (LLMs), that it draws attention away from the underlying system.

The Kill Chain and Bureaucratic Failures

The real question involves the "kill chain", the bureaucratic framework for moving from detection to destruction. The concept dates to French artillery reforms of the 1760s and has since evolved through successive management fads. Palantir's Maven Smart System represents the latest compression of this process, emerging from the "third offset strategy" announced in 2014 by Defense Secretary Chuck Hagel and Deputy Secretary Robert Work. The strategy aimed to use technology for faster decision-making, overwhelming adversaries with sheer operational tempo.

In 2017, Project Maven was established to address information overload from surveillance drones. After Google's exit, Palantir assumed the contract in 2019. The XVIII Airborne Corps tested Maven in exercises such as Scarlet Dragon, aiming to create an "AI-enabled corps". By 2024, the stated goal was 1,000 targeting decisions per hour, one every 3.6 seconds, a workload that in the 2003 Iraq invasion had occupied roughly 2,000 targeting personnel.

How Maven Works and Its Flaws

The Maven interface resembles corporate project management software combined with mapping applications, using a Kanban workflow system. It consolidates data from multiple systems, with machine-learning algorithms analysing imagery and sensor data to detect targets. Officers then select strike options from ranked recommendations. LLMs like Claude were added later for summarising intelligence reports but are not core to targeting.
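
To make that workflow concrete, here is a minimal sketch of a ranked-recommendation queue of the kind described above. Everything in it is hypothetical: the class names, confidence figures, and the stale database entry are invented for illustration and are not drawn from Maven itself.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    site_id: str             # hypothetical identifier
    db_classification: str   # whatever the facility database currently says
    model_confidence: float  # classifier score from imagery/sensor fusion

def rank_candidates(candidates: list[Candidate]) -> list[Candidate]:
    """Rank strike recommendations by model confidence alone.

    Note what is absent: no cross-check of db_classification against
    open sources, no staleness check on the database entry. The ranking
    simply inherits whatever the upstream records claim.
    """
    return sorted(candidates, key=lambda c: c.model_confidence, reverse=True)

queue = rank_candidates([
    Candidate("site-0417", "military facility", 0.94),  # stale entry: in fact a school
    Candidate("site-0952", "vehicle depot", 0.81),
])

for c in queue:
    # The officer sees only a summary card like this, under a tight time budget.
    print(f"{c.site_id}: {c.db_classification} (confidence {c.model_confidence:.2f})")
```

The point of the sketch is what is missing: nothing in the ranking path ever re-validates the inherited classification, which is precisely the gap the school fell through.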

This compression eliminates deliberation, and the failure pattern has historical precedents. In Vietnam, Operation Igloo White's sensor network could not distinguish trucks from ox carts, yet produced inflated claims of success. In the 2003 Iraq invasion, the rapid targeting cycle that analyst Marc Garlasco oversaw struck 50 buildings without hitting a single intended target. The pattern repeats: systems optimise for speed without verifying accuracy, a phenomenon historian Michael Sherry called "technological fanaticism".

The Bureaucratic Double Bind

Organisations rely on unacknowledged human judgment to interpret their own rules, but admitting as much undermines their authority. Theodore Porter's concept of "trust in numbers" shows that quantitative rules are adopted because they are defensible, not because they are accurate. Palantir CEO Alex Karp's vision of software-led decision-making, inspired by the coordination of bee swarms, strips out that mediation, and with it the ability to question a signal. The result encodes bureaucracy into software, making it rigid and prone to failure whenever its categories do not fit reality.

For the Shajareh Tayyebeh school, the target package presented a military facility, even though open sources such as business listings and Google Maps identified the site as a school. At 1,000 decisions an hour, nobody had time to look for the discrepancy. A former senior official noted the building had sat on a target list for years, and in all that time the error was never caught.
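
The arithmetic behind that claim is worth spelling out. The sketch below uses the article's 1,000-decisions-per-hour figure; the one-minute estimate for even a cursory open-source cross-check is our assumption, not a reported number.

```python
# Time budget implied by the tempo reported above. The 1,000/hour figure
# comes from the article; CROSS_CHECK_SECONDS is an assumed minimum for a
# single map or business-listing lookup, not a reported value.
DECISIONS_PER_HOUR = 1_000
seconds_per_decision = 3_600 / DECISIONS_PER_HOUR  # 3.6 seconds

CROSS_CHECK_SECONDS = 60  # assumption: one cursory open-source lookup

shortfall = CROSS_CHECK_SECONDS / seconds_per_decision
print(f"{seconds_per_decision:.1f} s available per decision; "
      f"a one-minute check needs ~{shortfall:.0f}x that.")
```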

Conclusion: The Human Decisions Behind the Atrocity

Congress did not authorise the Iran war, yet American forces struck 6,000 targets in two weeks, the school among them. Reporting framed the strike as an "AI error", but that framing distracts from human choices: compressing the kill chain, prioritising speed over deliberation, and building a system expected to produce 1,000 decisions an hour. Calling it an AI problem hides the accountability of those who started the war and refuse to end it. The deaths of as many as 180 people, most of them children, stem from bureaucratic failures, not technological malfunctions.