
Project Maven started in 2017 as a tool for scraping footage from drones
Devon Bistarkey, Defense Innovation Unit
Project Maven
Katrina Manson, WW Norton
Israel’s military is using artificial intelligence to identify targets in Gaza, the US is doing the same in Iran, and Ukraine is pressing ahead with smart drones. AI warfare is not the future of conflict; it is the present.
Unpacking the global guidelines for military use of AI (the potential benefits, pitfalls and murky ethics) will fill books for decades to come. But that isn’t what Katrina Manson sets out to do in Project Maven. Instead, drawing on interviews with more than 200 people, she tells the story of the US military’s journey into AI warfare, or one such story at least, given the 800 AI projects hidden within the Pentagon.
In 2017, Project Maven was launched to build a tool that could scour footage from drones and extract useful intelligence; the drones were collecting more data than any human could interpret. Maven had a rocky start, says Manson. The military deployed it with soldiers in Somalia eight months after the project’s launch, and its algorithms told analysts that clouds were school buses and trees were people.
We follow one project manager back to his days as an intelligence officer in Afghanistan, trying to plan missions and direct troops with nothing more than a dusty laptop loaded with Microsoft Office: where is the enemy, where is safety, what does success look like?
People at war are inefficient: they get tired and make mistakes. AI could clear the fog of war, believed the usually secretive builders of Project Maven who spoke to Manson. But they intended it to go much further: to choose targets, hunt them down and kill them. Unencumbered by slow, deliberate human decisions, killer robots could overwhelm enemies quickly.
“We kill the wrong people all the time. A machine can’t be worse than a human,” says one insider. The team developed Maven into a suite of tools and worked to convince frontline personnel to adopt them. Results improved, but errors still occurred.
Since then, the US and other NATO members have deployed Maven in conflicts. Some 32 companies are working on it, Manson writes, and 25,000 US military users log in regularly. She also describes how it is used at border crossings and in the hunt for drug runners in the Caribbean. Can a state with such tools resist using them on its own citizens?
Most worrying, says Manson, are the efforts to cut people out of the loop entirely. So-called Goalkeeper flying drones and Whiplash naval drones can find their own targets and take them out. And humans have never invented a weapon they did not go on to use.
It’s hard not to think of Stanislav Petrov, the Soviet lieutenant colonel who, in 1983, used his own judgment to decide that reports of a US missile launch were a false alarm, thereby averting nuclear war. Would an AI have made that call?
For all its fascinating insights into Maven, the book tells us more about Pentagon bureaucracy, and about Silicon Valley’s willingness to take on any project, however unsavory, if the money is right, than it does about AI. Manson’s access is phenomenal, but the nature of military secrecy means we probably won’t know exactly what technology the US government has produced, or how and when it is being used, for years to come.
War has always been deeply uncomfortable, but modern conflicts, in which people watch someone thousands of kilometers away via drone and decide whether they warrant a lethal strike, have made it impersonal. Leaving this to AI risks making war too easy to wage, and its consequences too easy to ignore.
We need to ensure that the power conferred by AI weapons is treated with the gravity it deserves, but Manson tells a chilling story suggesting the reality is different. One interviewee who hoped to join Project Maven reportedly told the panel that their motivation was to “reduce the non-American population”. They got the job.
Two more great reads on AI and warfare

The Making of the Atomic Bomb by Richard Rhodes
There are many lessons here about where military AI could go. Like the Manhattan Project, it threatens to permanently raise global tensions and the stakes of war, just for starters.

Should We Ban Killer Robots? by Deane Baker
This is a deep dive into the debate from an ethics professor, who examines the difficult issues of trustworthiness, control and accountability that arise when governments turn soldiers’ work over to computers.