AI, a Dead Student, and US Airstrikes: How a Civilian Became Caught in Modern Warfare
As debate intensifies over the role of artificial intelligence in military operations, particularly in strikes against Iran, attention has turned to the civilians caught in the crossfire. A joint investigation by The Independent and conflict monitoring group Airwars examines the death of Abdul-Rahman al-Rawi, a 20-year-old student killed in a US airstrike in Iraq in 2024. His death is the first known civilian fatality from an airstrike in which the use of AI-assisted targeting was publicly acknowledged, raising profound ethical and operational questions.
The Fatal Strike in Al-Qaim
Abdul-Rahman al-Rawi, a construction diploma student, stepped out of his home in al-Qaim, western Iraq, after hearing a burst of noise overhead in the early hours of a February morning in 2024. Within seconds, he was dead, killed instantly by a US missile that destroyed a stationary car he was standing beside. His brother, Anmar, described the gruesome aftermath, saying it took two days to gather all of his brother's remains. Abdul-Rahman was one of as many as three bystanders reportedly killed in one of 85 coordinated US attacks that night against Iraqi government-aligned forces and Iranian-backed militias in Iraq and Syria.
AI Targeting and the Unraveling Boast
The operation was initially deemed a success, with a senior US official boasting about using state-of-the-art AI technology for precision targeting. However, this claim quickly unraveled as civilian casualties emerged. While US officials acknowledged Abdul-Rahman's death as a mistake and sent a letter of condolence, the role of AI in the strike has come under intense scrutiny. When questioned, US Central Command (Centcom) stated it does not know whether the strike used AI-assisted targeting, a response that experts say raises significant red flags regarding accountability and record-keeping.
Expert Warnings and Legal Concerns
Jessica Dorsey, a professor of international law specializing in AI warfare at Utrecht University, highlighted the implications of Centcom's statement, noting that it suggests a failure to maintain records of targeting assessments. Dr. Elke Schwarz of Queen Mary University of London warned that AI-assisted strike systems operate at such scale and speed that human judgment can fall behind, increasing the risk of errors. The strike on al-Qaim, a town near the Syrian border, came in response to an attack on a US base in Jordan that killed three American troops. It reduced buildings to rubble, wounded up to 15 people, and allegedly killed medical personnel protected under international humanitarian law.
Project Maven and the Future of Warfare
At the heart of these concerns is Project Maven, a US military initiative launched in 2017 to integrate machine learning across operations. It uses computer vision algorithms to identify potential targets in satellite imagery and radar data. Schuyler Moore, then Centcom's chief technology officer, revealed in a Bloomberg interview that targets in the February 2024 strikes were identified with help from Project Maven, the first public acknowledgment of AI use in specific strikes. Officials have admitted, however, that the system struggles across different terrains, with accuracy sometimes dropping below 30%, and carries risks such as automation bias, in which humans overly trust AI outputs without critically assessing them.
Family Devastation and Lack of Accountability
Abdul-Rahman's death devastated his family. His father remains deeply depressed, and his mother, who suffered a heart attack and now struggles with high blood pressure, breaks down whenever she is reminded of her son. The family received a brief letter of condolence from the US Air Force, but it stopped short of accepting explicit responsibility, leaving them angered and without compensation. Anmar said that no amount of hindsight will bring back his brother, underscoring the human cost of technological advances in warfare.
Broader Implications and Ongoing Disputes
Since February 2024, AI-assisted targeting has reportedly been used widely in attacks across Iran, with rights groups claiming more than 1,200 civilian deaths, including 194 children. The US military's Maven Smart System, paired with AI tools such as Anthropic's Claude, has reportedly been deployed in these operations. This has sparked high-profile disputes, including Anthropic's rejection of Department of Defense demands that its models be used for autonomous weapons and surveillance, and resignations at companies such as OpenAI over ethical concerns. The case of Abdul-Rahman al-Rawi raises critical questions about how much decision-making is delegated to AI in strikes, and about the false confidence such systems can instill in operators, pointing to an unnerving future in which human oversight may be compromised in the name of efficiency.
