Project Maven: The AI System at the Heart of US Warfare and Civilian Casualties
Project Maven, launched by the Pentagon in 2017, has become central to America's push to integrate artificial intelligence deeply into its military operations. An investigation by The Independent and conflict monitoring group Airwars has uncovered that this AI system played a role in airstrikes that resulted in civilian deaths, raising serious questions about accountability and the future of automated warfare.
The First Acknowledged Civilian Death from AI-Assisted Targeting
Abdul-Rahman al-Rawi, a 20-year-old student, has been identified as the first civilian killed in a series of airstrikes where the United States acknowledged using AI assistance for target identification. The strikes occurred in early February 2024 in Al-Qaim, Iraq. Weeks after these operations, a senior US official publicly boasted about employing AI to help identify targets, though US Central Command later claimed it "did not know" whether AI had been involved.
This incident highlights the growing use of AI in US military campaigns, with Project Maven serving as a cornerstone technology. The system is typically integrated into the broader Maven Smart System (MSS), an AI-enabled warfighting decision-support platform developed by Palantir that uses Anthropic's Claude AI for analysis.
Expanding Use Amidst Intensified Bombing Campaigns
Recent deadly US attacks across Iran, which have killed hundreds in the past week, are reported to have used Palantir's Maven Smart System to identify targets. US military officials last week acknowledged that American forces are likely responsible for a strike on a girls' school in Minab, Iran, where authorities say more than 165 people were killed, most of them students.
The bombing campaign has been so intense that, according to Airwars analysis, in its first 100 hours the US and Israel declared strikes on more targets in Iran than the US-led Coalition hit in the first six months of its bombing campaign against ISIS.
"A state has responsibility to know if it has used AI on any of their strikes," said Jessica Dorsey, a professor of international law specializing in AI warfare at Utrecht University. "Commanders should have access to the intelligence their strikes are based on in order to directly interrogate the target to ensure positive identification."
What Exactly Is Project Maven?
Established by the Pentagon in 2017, the Algorithmic Warfare Cross-Functional Team, better known as Project Maven, was adopted by the National Geospatial-Intelligence Agency (NGA). The system uses computer vision algorithms to locate and identify targets in satellite imagery, video, and radar data, detecting movement and tracking potential threats.
Project Maven saw its first major deployment following Russia's invasion of Ukraine in 2022, when a basic version was provided to Ukrainian forces to help identify Russian military vehicles, personnel, and structures. The system has delivered mixed results, however, with snow, dense foliage, and decoys known to hinder its capabilities. In desert terrain like western Iraq, where weather conditions can change the landscape abruptly, Maven's accuracy can drop below 30 percent, according to US officials who spoke with Bloomberg.
Despite these limitations, Maven is now available to all US services and combatant commands. Since the 2024 strikes, its user base has more than quadrupled, according to then-NGA Director Vice Admiral Frank Whitworth. The system is currently capable of making 1,000 targeting recommendations per hour, "choosing and dismissing targets on the battlefield," Whitworth explained.
"We want to use it for everything, not just targeting," Whitworth acknowledged, noting that the NGA now uses artificial intelligence so routinely that it has created a standardized disclosure for AI-generated intelligence products.
Growing Dissent and Corporate Controversies
As Project Maven's use expands, so does dissent against it. US Secretary of Defense Pete Hegseth declared in January that America would become an "AI-first" warfighting force across all domains, vowing to "unleash experimentation" and "eliminate bureaucratic barriers." Following the strikes on Iraq in 2024, US Central Command's chief technology officer Schuyler Moore told Bloomberg that the "benefit that you get from algorithms is speed."
However, with this speed have come growing concerns that human decision-makers are doing little more than rubber-stamping recommendations made by AI. A group of experts warned in an April 2025 submission to the United Nations that current frameworks fail to address the "profound risks" that AI-assisted targeting systems like Project Maven pose to international humanitarian law and to human judgment in targeting decisions.
These concerns have been echoed by technology workers opposed to their companies' involvement in AI systems for warfare. Google, initially a key player in Project Maven, exited the project in 2018 following employee protests and resignations over the company's involvement in artificial intelligence for lethal purposes. Palantir stepped in to fill the void, referring to the project internally as 'Tron,' after the 1982 film in which a computer engineer is transported into a digital world.
Revelations that Claude was used in the US raid on Venezuela in January led to tensions between its maker, Anthropic, and the Department of Defense. Anthropic does not permit its AI systems to be deployed for mass domestic surveillance or fully autonomous weapons, and it refused to back down under Pentagon pressure. On March 5, the Pentagon retaliated by designating Anthropic a "supply chain risk," a label with major consequences for the company.
"America's warfighters ... will never be held hostage by unelected tech executives and Silicon Valley ideology. We will decide, we will dominate and we will win," Pentagon press secretary Kingsley Wilson stated.
Expert Concerns: Algorithmic Bias and Human De-skilling
Speaking to The Independent, Professor Dorsey and Dr. Elke Schwarz, a specialist in the same field at the London School of Economics, raised several critical concerns about AI-assisted targeting systems. Both were among the experts who warned of the risks of such technology last year, highlighting two central issues: algorithmic bias and human de-skilling.
"The criteria the US has used in the past is 'military age male.' You can't just go round killing military aged males," said Professor Dorsey. "And maybe in a computer vision algorithm, maybe they've programmed in something like carrying a weapon. But carrying a weapon is not something that should sentence you to death."
Dr. Schwarz emphasized the data quality problem: "If you don't have enough accurate, reliable or up-to-date data, your system is going to be vulnerable and flawed, and that in itself contains potential for harm. The big challenge, really, is that speed and scale are prioritized. Speed and scale are paramount in these kinds of systems, and that accelerates the action chain. That's the allure, that's the seductive part about the system."
These concerns are not merely theoretical. Israel's offensive in Gaza included an AI-assisted target-creation platform called 'the Gospel' which produces potential targets so rapidly that some Israeli officers have compared it to a "mass assassination factory." Another Israeli AI-powered target identification tool, called Lavender, at one stage identified 37,000 potential targets based on their apparent links to Hamas and Islamic Jihad. One Israeli intelligence source told The Guardian the role of humans overseeing Lavender's target selection was minimal: "I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added-value as a human, apart from being a stamp of approval."
Professor Dorsey also warned of the risk of "automation bias," in which humans begin trusting computer outputs without critically assessing targets themselves. As militaries rely increasingly on AI-assisted targeting, she argued, personnel will begin offloading their own responsibilities onto machines. "We're de-skilling ourselves. Commanders are getting less good at identifying what they are responsible to do on a battlefield."
"Humans have a tendency to not question decisions that are made by computational outputs," Dr. Schwarz added, underscoring the psychological dimension of human-machine interaction in lethal decision-making contexts.
