Despite the challenges of COVID-19, the world’s governments will meet at the UN in Geneva from 21 to 25 September to discuss the humanitarian and security threat posed by “lethal autonomous weapons systems” – high-tech killer robots that could target people without meaningful human control over the use of violence.
But despite the media clichés that often accompany these discussions – journalists seem unable to resist references to Terminator and Skynet – killer robots are not some sci-fi fantasy. They are the result of a long trend in weapons development of using technology to disembody killing.
In my new book, Political Minefields (I.B. Tauris), I trace the history of efforts to avoid responsibility for violence by using remote and automated weapons like landmines, cluster munitions, armed drones and, now, killer robots.
“Weapons developers are seeking to pervert the power of information and communications technology for deadly ends, taking humans entirely out of the decision to kill,” writes Jody Williams, who was awarded the 1997 Nobel Peace Prize along with the International Campaign to Ban Landmines (ICBL), in her foreword to my book. “In the autonomous, weaponized robot, they are essentially designing mines that actively seek out their targets, that can follow you, that can fly.”
From the WWII minefields of North Africa and the US automated bombing of Laos to armed drones targeting makers of improvised explosive devices, military planners have tried, as US Vietnam War General William Westmoreland put it, to “replace wherever possible the man with the machine.”
But in my research, I’ve learned that the fever dream of algorithmic warfare never delivers on its promise of victory by remote control. People are too messy, unpredictable, clever, and tricky to meet the assumptions programmed into military technology.
Allied troops simply marched through the Nazi minefields at El Alamein, taking the casualties and repurposing the mines they found for their own uses. Vietnamese communist soldiers spoofed the various electronic detectors dropped from US warplanes onto their pathways through the jungle. They sent animals down the trail, placed bags of urine next to so-called “people sniffers”, and played tapes of vehicle noises next to microphones – prompting computerized bombers to unload explosives onto phantom guerrillas.
As I have travelled in and around the world’s minefields and cluster munition strike zones, I have heard the same story over and over again: in Afghanistan, Bosnia, Cambodia, Iraq, Laos and South Sudan. In each place, it is civilians who have borne the consequences of turning warfare over to automated devices. Decades after the soldiers who placed and armed them have gone, landmines in Afghanistan continue to maim children. Lao farmers still risk setting off unexploded cluster bomblets when they plough their rice fields.
The final chapter of my book highlights the far-sighted work of the International Committee for Robot Arms Control (ICRAC) and the Campaign to Stop Killer Robots, which have sounded the alarm on the emerging militarization of artificial intelligence and robotics. They are not anti-technology Luddites. “It’s OK for a plane to fly itself,” Dr. Peter Asaro, co-founder of ICRAC and a New School professor, told me. “It’s not OK for a plane to decide who to shoot at.” We have, he says, “the right not to be killed by a machine.”
For the last six years, governments have gathered in Geneva to consider how to address lethal autonomous weapons systems under the auspices of the Convention on Certain Conventional Weapons (CCW). Despite pressure from a wide range of countries and civil society, the big military powers have dragged their feet, abusing the rule of consensus decision-making and running down the clock to avoid constraining their high-tech weapons R&D.
But the diplomatic patience of the majority of nations in the CCW is running out. A global poll found that 61% of people across 26 countries opposed killer robots. And UN Secretary-General António Guterres has directly called on governments “to ban these weapons, which are politically unacceptable and morally repugnant.”
Just as human reality is more complex than software programs, society is not the passive recipient of technological change. In writing my book, I have been inspired by meeting people who have mobilized to clear minefields, support survivors of cluster munitions and pressure diplomats to negotiate treaties banning inhumane weapons. We must follow their example and demand new international law ensuring that the use of force remains under meaningful human control.
Matthew Bolton, ICRAC member and associate professor of political science at Pace University, is the author of Political Minefields: The Struggle against Automated Killing. If you can’t find a copy in your local bookstore, it is available at a 35% discount from Bloomsbury with the code GLR TW5.