AI Warfare's Hidden Cost: Defence Contractors and the Fog of Chosen Blindness

The Dawn of AI Warfare: Precision Weapons and Chosen Blindness

In conflicts from Gaza to Iran, a disturbing pattern has emerged: precision weapons, chosen blindness, and dead children. The cost of failing to regulate artificial intelligence in warfare is already devastatingly high. This is not merely a story about advanced technology; it is a systemic shift in which AI firms effectively operate as defence contractors while hiding behind their models to avoid responsibility.

The Fog Procedure: From Human Soldiers to Algorithmic Systems

An Israeli military practice known as the "fog procedure" illustrates this logic. First used during the Second Intifada, it has soldiers in low-visibility conditions fire bursts into the darkness on the assumption that an unseen threat is present. This violence, licensed by blindness, has since been refined and systematised in AI warfare. In Gaza, described as the first major "AI war", systems processed billions of data points to rank individuals as probable combatants, generating target lists with chilling efficiency.

The darkness in the algorithm is a deliberate design choice, not a terrain condition. This chosen blindness creates deniability, making violence seem inevitable and shifting decision-making from people to procedures. In Minab, in southern Iran, for instance, a US strike hit the Shajareh Tayyebeh elementary school, killing at least 168 people, most of them children. The weapons were precise, but the intelligence was a decade out of date: the site had long since been converted from a military base into a school. Whether or not AI directly selected that target, a system built on algorithmic targeting enabled errors of this kind at unprecedented scale.


Inheriting and Automating a Deadly Logic

AI targeting systems did not invent this logic; they inherited it from human practice. In Gaza, a 2014 strike killed four boys on a beach and was logged as a targeting error because aerial views made identification difficult. A classified Israeli military database suggests that named militants accounted for roughly 17% of more than 53,000 deaths in Gaza, implying that the remaining 83% were civilians. This is not precision warfare but imprecision adopted as an aim, now automated by AI.

These systems inherently defy international humanitarian law, which requires careful verification and the protection of civilians. In Gaza, an algorithm inferred combatant status statistically across the entire population, with human operators reviewing each name for about 20 seconds, long enough only to confirm the target's gender. One system produced over 37,000 targets within weeks, reducing humans to queue managers rather than decision-makers.

The Corporate Players: From Silicon Valley to the Battlefield

The companies involved are not obscure startups but major tech firms integrated into military targeting. Palantir, seeded with early CIA funding, supplied AI systems for the Iran campaign that drew on large language models such as Anthropic's Claude. When Anthropic resisted on ethical grounds, the Pentagon turned to OpenAI, which dropped its ban on military use in 2024. Google and Amazon signed Project Nimbus, a contract with Israel worth more than $1bn, while Microsoft was deeply integrated before partially withdrawing. Anduril builds autonomous weapons, and venture capital firms such as Andreessen Horowitz lobby heavily for the sector.

These firms blur the line between commercial and defence products, evading regulations that apply to traditional defence contractors such as Raytheon. Palantir has spent millions lobbying Washington, outspending Northrop Grumman in one quarter. The EU AI Act exempts military applications, deferring instead to international law, a framework these systems systematically destroy by obscuring chains of accountability.

Accountability Erosion and Regulatory Failures

International law demands identifiable decision-makers and reconstructible reasoning, but AI targeting dissolves attribution across engineers, commanders, and corporations. Probability scores replace auditable logic, and 20-second approvals replace thorough verification. Companies like Palantir sit outside the Geneva Conventions, which bind states rather than corporations.


Regulatory efforts have been inadequate. The US 2025 National Defense Authorization Act promotes further AI adoption, not restraint. Only pressure points such as EU export controls or ICJ advisory opinions on Palestinian rights offer a path toward liability. Regulation must require explainable AI systems, assessments of cumulative civilian cost, and supply-chain liability.

Conclusion: A Call for Action Before the Next Tragedy

The fog procedure now defines modern warfare, but unlike the soldiers firing into the darkness, companies like Palantir operate from Palo Alto at no personal risk. To prevent future Minabs, governments must regulate AI as defence technology, not as consumer tech. The cost of inaction is already too high, written in rows of small coffins. It is time to lift the fog and hold these hidden contractors accountable.