Deadly School Strike in Iran Raises Critical Questions on AI and Outdated Intelligence

Deadly Attack on Iranian Girls' School Kills 175, Exposing Critical Flaws in Military Intelligence

In the chaotic opening hours of the U.S.-Israel military campaign against Iran, a devastating missile strike hit the Shajarah Tayyiba elementary school for girls in Minab, southern Iran. The attack, which occurred on Saturday, February 28, 2026, as families were rushing to bring their children to safety, resulted in the deaths of at least 175 people, most of them young children, according to Iran's ambassador to the United Nations.

Outdated Intelligence and a Fatal Mistake

Satellite imagery reveals that the school was once part of an adjoining Islamic Revolutionary Guard Corps military compound but had been separated from it by a wall between 2013 and 2016. A nearby clinic was similarly walled off between 2022 and 2024, and an outdoor play area was visible on Google Earth as early as 2017. Despite these clear civilian indicators, the school reportedly remained on a U.S. target list, mistakenly identified as a military site.

Retired Master Sgt. Wes J. Bryant, a former senior policy analyst at the Pentagon's Civilian Protection Center of Excellence, highlighted the catastrophic failure. "Potentially using targeting data that is a decade-plus old and not updating it and not going in and verifying what's happening on the ground right now — Is this still actually a military target? Are there civilians in it, even if it is? And how are we going to address that? — none of that happened," he stated.

The Role of AI in Military Targeting

The targets for Operation Epic Fury were identified with assistance from the National Geospatial-Intelligence Agency's Maven Smart System, a platform created by Palantir that integrates surveillance and intelligence data. This system has been coupled with Anthropic's Claude, a large language model used to accelerate data processing. While the AI tool does not itself generate targets, it works within Maven to flag potential points of interest for military intelligence analysts.

Central Command's Adm. Brad Cooper defended the use of AI, stating on March 11, "These systems help us sift through vast amounts of data in seconds, so our leaders can cut through the noise and make smarter decisions faster than the enemy can react. Humans will always make final decisions on what to shoot and what not to shoot, and when to shoot. But advanced AI tools can turn processes that used to take hours and sometimes even days into seconds."

However, experts have raised serious concerns. Seth Lazar of the Machine Intelligence and Normative Theory Lab at Australian National University warned, "The use of Claude to select military targets should send chills down the spine of anyone who's been spending the last few months vibe-coding, vibe-researching, vibe-engineering. You can't do test-driven development when the test is firing a precision-guided missile."

Systemic Failures and Gutted Safeguards

The tragedy occurred against a backdrop of systemic failures within the U.S. military's targeting processes. According to reports, U.S. Central Command generated the strike coordinates from outdated Department of Defense records. The Defense Intelligence Agency maintains a database of potential targets, each assigned a "basic encyclopedia" number, but the sheer volume of entries — some based on intelligence years old — may have overwhelmed analysts' capacity to verify each one.

Compounding this, the Pentagon's Civilian Harm Mitigation and Response program, formalized in 2022 to reduce civilian casualties, has been largely gutted. Bryant revealed that the mission now exists mostly on paper, with Central Command's team cut from 10 personnel to just one. Without that oversight capacity, the command effectively discarded months of civilian-protection work that might have prevented the Minab tragedy.

Political and Legal Fallout

The Trump administration's rush to strike Iran has come under intense scrutiny, with key questions emerging about human accountability in an era of rapidly advancing military technology. Anthropic, the creator of Claude, has sued the Department of Defense after being labeled a "supply chain risk" and facing demands to replace its AI tools. The company noted in its lawsuit that the U.S. military "reportedly 'launched a major air attack in Iran with the help of [the] very same tools' that are 'made by' Anthropic."

During a Senate hearing on March 12, Air Force Gen. Alexus Gregory Grynkewich, commander of U.S. European Command, addressed the attack. "When tragedies like this happen, it causes us all to reflect and try to improve our processes," he said, emphasizing that there are "robust standards" in targeting, including regular image reviews to update intelligence. When asked by Democratic Sen. Kirsten Gillibrand how the U.S. "could have gotten this wrong," Grynkewich responded, "I would hesitate to speculate. There's usually a chain of errors and mistakes that happen. I would say we need to let the investigation play out and find all those factors."

The Department of Defense is expected to publish a full report from its investigation, but preliminary findings confirm U.S. responsibility for the strike. This incident has ignited a fierce debate about the ethical use of AI in warfare, the dangers of relying on outdated intelligence, and the urgent need for robust safeguards to prevent civilian harm in future conflicts.