Memorandum for delegates at the Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts (GGE) Meeting on Lethal Autonomous Weapons Systems (LAWS)
Geneva, 13-17 November 2017
ICRAC is an international not-for-profit association of scientists, technologists, lawyers and policy experts committed to the peaceful use of robotics and the regulation of robot weapons. Please visit our website www.icrac.net and follow us on Twitter @icracnet.
ICRAC is a founding member of the Campaign to Stop Killer Robots www.stopkillerrobots.org
What is Artificial Intelligence (AI)?
The term AI tends to evoke science-fiction tropes and even notions of “super intelligence”. In reality, AI is simply an umbrella term for computational techniques that automate tasks we would normally consider to require human intelligence. This does not mean that these software programs are themselves intelligent.
How fast is AI progressing?
Enthusiasm about the progress of AI has increased considerably in the last couple of years, even though the underlying techniques have not improved much since the 1980s. This is largely because of two factors:
(i) the acquisition of big data sets with billions of examples;
(ii) plummeting costs for massive processing power.
Both factors provide an ideal environment for a cluster of computational techniques called Machine Learning (ML). The exploitation of ML has led to the mass commercialization of AI across a wide range of applications by various companies. Current AI progress is thus best described as spreading sideways rather than moving upwards.
Do civilian and military applications of AI differ?
Yes. Any autonomous system relying on AI computational techniques is driven by brittle software based on algorithms and statistics. Thanks to the availability of large amounts of training data, we will hopefully soon be able to make these techniques work in applications such as self-driving cars, to name a prominent example from the civilian sector. But this does not translate into military applications. Aside from the fact that cars and weapon systems are designed for completely different purposes, the comparatively structured and regulated environment of road traffic does not compare at all to the adversarial, chaotic environment of the battlefield. The fog of war will only allow for faulty or, at best, noisy data. So beware of false equivalences!
Would LAWS be “precision weapons”?
Possibly (though they would still be illegal to use). LAWS could take various forms. For instance, a swarm of hobby drones fitted with heat sensors and small explosive payloads could be programmed to attack everything that emits body temperature. Such a three-dimensional, moving minefield of LAWS would be the opposite of a precision weapon.
But let us assume, for the sake of argument, LAWS designed with military-grade accuracy in mind. Fitted with better sensing and data-processing hardware and software, as well as payloads tailored to the system’s mission, such systems could be more precise than current weapon systems. But the technical potential for accuracy and the application of violent force to a legitimate target are two separate issues. Even the most high-tech precision weapon system has to be used in a manner that is legal under International Humanitarian Law (IHL).
IHL dictates that, when using a weapon system, constant care must be taken to avoid or minimize civilian casualties (principles of distinction and precautions in attack). It also prohibits launching or continuing an attack when the expected civilian losses would exceed the military advantage sought (principle of proportionality). These concepts enshrined in IHL are only meaningful in the context of human judgment. Machines are a far cry from the reasoning that a human military commander acting responsibly and in compliance with the law would engage in. For the foreseeable future, machines will not be able to discriminate combatants from civilians, let alone judge which use of force or type of munition is proportionate in light of the military objective. Hence we cannot and must not expect modern weapon systems to free us from these legal obligations. On the contrary, we have to heed these principles in step with our growing technological capabilities.
For example, before launching an attack, and throughout its execution, IHL requires military commanders to take all feasible precautions to spare the civilian population, making use of all the information available to them from all sources. An autonomous weapon system fitted with various sensors for targeting purposes would thus require a commander to make use of the data it gathers and the additional information it generates while the system is in use. A commander cannot choose to treat this new “smart” precision weapon like the “dumb” weapons of the past, that is, as if this information were not being made available by the system or as if it could be ignored. Instead, weapon technology and legal obligations go hand in hand. Consequently, the more sophisticated our weapon systems become, the more feasible meaningful human control over the critical functions of identifying (“fixing”), selecting and engaging targets becomes. And hence the more care is required to ensure that control.
This is not a particularly new insight, of course; it is why advanced laser-guided munitions are used with tactics, techniques and procedures that differ from those of simple free-falling bombs. In sum, fully autonomous weapon systems (LAWS), that is, systems designed in a way that would require commanders to abdicate meaningful human control, are simply incompatible with the way IHL demands weapons be used by human military commanders on the battlefield.
Would LAWS make war more humane?
No. It is sometimes argued that autonomy in weapons systems could make wars more humane by ensuring greater precision in targeting military objectives and by clearing the battlefield of human passions such as anger, fear and vengefulness. Even assuming – but not conceding (see above: Would LAWS be “precision weapons”?) – that LAWS might one day somehow reach human or even “higher-than-human” performance with respect to adherence to IHL, this would not “humanize” future armed conflicts, for at least three reasons:
(i) delegating the power to take life-or-death decisions to machines blatantly denies the human dignity of the recipients of lethal force and their intrinsic worth as human beings;
(ii) LAWS trivialize the decision to take someone else’s life by relieving war-fighters of the moral burden inevitably associated with it;
(iii) while it is true that machines’ decision-making will never be influenced by negative human emotions, it is equally true that LAWS are also immune to compassion and empathy, which in certain situations could compel a human to refrain from using lethal force even when she or he would legally be entitled to do so.
Would LAWS proliferate?
Yes. LAWS need not take the shape of one specific weapon system akin to, for instance, a drone. Nor do LAWS require a very specific military technology development path, the way nuclear weapons do, for example. As AI software and robotic hardware mature and continue to pervade the civilian sphere, militaries will feel prompted to adopt them more and more (however, see above: Do civilian and military applications of AI differ?), in continuation of a dual-use trend that is already observable in, for instance, armed drones.
Research and development for LAWS-related technology is thus already well underway and distributed across countless university laboratories and commercial enterprises, making use of economies of scale and the forces of the free market to spur competition, lower prices and shorten innovation cycles. This renders the military research and development effort in the case of LAWS different from that of past high-tech conventional weapon systems. So, without even taking exports into account, it is easy to see that LAWS would be comparatively easy to obtain (as well as reverse-engineer) and thus prone to proliferate quickly to a wide range of state and non-state actors.
Would LAWS threaten global stability?
Yes. LAWS promise a military advantage inter alia because they are expected to perform certain tasks much faster than a human could. We argued above that IHL does not allow for relinquishing meaningful human control. Strategic considerations, too, suggest restraint and keeping meaningful human control intact. Without meaningful human control, the actions and reactions of individual LAWS as well as swarms of LAWS would have to be controlled by software alone.
Consider the example of adversarial swarms deployed in close proximity to each other. Their respective control software would have to react to signs of an attack within a split-second timeframe – by evading or, possibly, counter-attacking in a use-them-or-lose-them situation. Indications of an attack – sun glint interpreted as a rocket flame, sudden and unexpected moves by the adversary, or just some malfunction – could trigger escalation. It is in the nature of military conflict that these kinds of interactions between two adversarial systems or swarms would not have been tested or trained beforehand. In addition, it is, technically speaking, impossible to fathom all possible outcomes in advance. In other words, the interaction of LAWS, if granted full autonomy, would be unpredictable and take place at operational speeds far beyond human fail-safe capabilities.
Comparable runaway interactions between algorithms are already observable in financial markets. Hence it is a real possibility that LAWS interactions could result in an unwanted escalation from crisis to war or, within armed conflict, to unintended higher levels of violence. This would mean an increase in global instability and is unpleasantly reminiscent of Cold War scenarios of “accidental war”.
Would banning LAWS stifle technology?
No, on the contrary. Global governance for LAWS would not mean a prohibition or control of specific technologies as such. The widespread availability and dual-use potential of AI software and robotics suggest that this would not only be a completely futile, Luddite endeavor. It would also be severely misguided in light of the various benefits potentially flowing from the maturation of these technologies in civilian applications.
What is more, a number of recent developments in fact suggest that technology companies would welcome a ban on LAWS, since they do not want their products to be associated with “Killer Robots”. Google, for instance, stated years ago that it is not interested in military robotics. The Canadian robot manufacturer Clearpath Robotics even officially joined forces with the Campaign to Stop Killer Robots in 2014, “ask[ing] everyone to consider the many ways in which this technology would change the face of war for the worse” and committing to create robotic products solely “for the betterment of humankind” instead. And in 2017, 160 high-profile CEOs of companies developing artificial intelligence technologies signed an open letter calling for the CCW to act.
So preventive arms control for LAWS would not mean the regulation or prohibition of specific technologies. Instead, it would give tech entrepreneurs and manufacturers guidance and assurance that their inventions and products cannot be misused. Hence arms control for LAWS is not about listing or counting (stockpiles of) individual weapon systems. Rather, it is about drawing a line regarding the use of autonomy in weapon systems, a line to retain meaningful human control and prohibit the application of autonomy in specific (especially the “critical”) functions of weapon systems.
The CCW has drawn a comparable line and established a similarly strong norm before, with the preventive prohibition of blinding laser weapons in 1995. This prohibition protects soldiers’ eyes on the battlefield; it is, obviously, not a blanket ban on laser technology in all its other uses, be they military or, especially, civilian in nature. In other words, just as we got to keep our CD players and laser pointers back then, we will get to keep our smartphones and self-driving cars this time.
Further reading:
Altmann, Jürgen/Sauer, Frank (2017): Autonomous Weapon Systems and Strategic Stability, in: Survival 59: 5, 117–142.
Amoroso, Daniele/Tamburrini, Guglielmo (2017): The Ethical and Legal Case Against Autonomy in Weapons Systems, in: Global Jurist. Online first.
Asaro, Peter (2012): On Banning Autonomous Weapon Systems. Human Rights, Automation, and the Dehumanization of Lethal Decision-Making, in: International Review of the Red Cross 94: 886, 687–709.
Garcia, Denise (2016): Future Arms, Technologies, and International Law: Preventive Security Governance, in: European Journal of International Security 1: 1, 94–111.
Sauer, Frank (2016): Stopping “Killer Robots”. Why Now Is the Time to Ban Autonomous Weapons Systems, in: Arms Control Today 46: 8, 8–13.
Sharkey, Noel (2012): The Evitability of Autonomous Robot Warfare, in: International Review of the Red Cross 94: 886, 787–799.