Declared in September 2009 at Sheffield, UK, by ICRAC founding members Jürgen Altmann, Peter Asaro, Noel Sharkey, and Rob Sparrow.
Given the rapid pace of development of military robotics, and the pressing dangers that these systems pose to peace and international security and to civilians in war, we call upon the international community to urgently commence a discussion about an arms control regime to reduce the threat posed by these systems.
We propose that this discussion should consider the following:
- Their potential to lower the threshold of armed conflict;
- The prohibition of the development, deployment and use of armed autonomous unmanned systems; machines should not be allowed to make the decision to kill people;
- Limitations on the range and weapons carried by “man in the loop” unmanned systems and on their deployment in postures threatening to other states;
- A ban on arming unmanned systems with nuclear weapons;
- The prohibition of the development, deployment and use of robot space weapons.
Endorsed by all ICRAC members.
Endorsed in October 2010 in Berlin, Germany, by a majority vote of participants at the 2010 Experts' Workshop. Not endorsed by all workshop participants; disagreements existed among the participants about both the details of the text and the underlying questions of ethics, policy, and political strategy. Individual endorsements of the text are listed at the end of this document. Some participants were unable to vote on or endorse the document because of their institutional affiliations.
Given the rapid pace of development of armed tele-operated and autonomous robotic systems, we call upon the international community to commence a discussion about the pressing dangers that these systems pose to peace and international security and to civilians, who continue to suffer most in armed conflict. Armed tele-operated and autonomous systems have the potential to accelerate the pace and tempo of warfare, to undermine existing arms controls and regulations, to exacerbate the dangers of asymmetric warfare, and to destabilize regional and global security. In particular, autonomous systems may further the indiscriminate and disproportionate use of force and obscure the moral and legal responsibility for war crimes.
We hold:
– That the long-term risks posed by the proliferation and further development of these weapon systems outweigh whatever short-term benefits they may appear to have.
– That it is unacceptable for machines to control, determine, or decide upon the application of force or violence in conflict or war.* In all cases where such a decision must be made, at least one human being must be held personally responsible and legally accountable for the decision and its foreseeable consequences.
– That these systems further escalate the already accelerating pace and tempo of warfare, undermining the capacity of human beings to make responsible decisions during military operations.
– That the asymmetry of forces that these systems make possible encourages states and non-state actors to pursue forms of warfare that reduce the security of the citizens of the states possessing them.
– That the fact that a vehicle is uninhabited does not confer a right to violate the sovereignty of states.
There is, therefore, an urgent need to bring into existence an arms control regime to regulate the development, acquisition, deployment, and use of armed tele-operated and autonomous robotic weapons.
We hold that this regime should prohibit:
– Further development, acquisition, deployment, and use of armed autonomous robot weapons.
– Arming new kinds of autonomous or tele-operated systems with nuclear weapons.
– The development, deployment, and use of robotic space weapons.
We hold that this regime should restrict:
– The range and payload of armed tele-operated uninhabited vehicles.
– The number, by class and capability, of armed tele-operated uninhabited systems fielded by any state.
– The endurance of these systems.
– The development, acquisition, and deployment of weaponised uninhabited systems below a minimum size.
* The decisions to which this principle should be applied include:
– The decision to kill or use lethal force against a human being.
– The decision to use injurious or incapacitating force against a human being.
– The decision to initiate combat or violent engagement between military units.
– The decision to initiate war or warfare between states or against non-state actors.
It is understood that in the application of this principle, rigorous and specific definitions will need to be negotiated as terms of a global convention, and that certain exceptions may be made where the use of automation in weapons and security systems has long been customary, or where a compelling case may be made for the necessity of automation in order to protect human life from immediate threats.
However, the world community should categorically reject the claim that military necessity will require robots to be capable of autonomous decision-making in the use of violent force in order to defend themselves and to prevail over opponents.
To the extent that this principle may, in the longer term, limit the effectiveness of armed robots in combat, particularly against each other, it is to be welcomed as a concrete, equitable, verifiable, and enforceable arms control measure. Such a measure may help to prevent the nightmare, so often foretold, in which human control over the maintenance of security, the use of lethal force, and the conduct of war is lost and surrendered to an armed, autonomous technology.
Endorsed by the following workshop participants:
Colin Allen, Indiana University, Bloomington IN, USA
Jürgen Altmann, Technische Universität Dortmund, Dortmund, Germany
Peter Asaro, New School University, New York NY, USA
Hans-Dieter Burkhard, Humboldt Universität, Berlin, Germany
Marcel Dickow, Stiftung Wissenschaft und Politik, Berlin, Germany
Mark Avrum Gubrud, University of Maryland, College Park MD, USA
Philipp von dem Knesebeck, Humboldt-Universität, Berlin, Germany
Armin Krishnan, University of Texas at El Paso, El Paso, TX, USA
Hans-Arthur Marsiske, Writer & Journalist, Hamburg, Germany
Otfried Nassauer, Berlin Information-center for Transatlantic Security (BITS), Berlin, Germany
Christoph Reißfelder, Technische Universität Darmstadt, Darmstadt, Germany
Samantha Rennie, Independent Consultant to NGOs and Foundations, London, UK
Lambèr Royakkers, Eindhoven University of Technology, The Netherlands
Frank Sauer, Bundeswehr University Munich, Munich, Germany
Niklas Schörnig, Peace Research Institute Frankfurt, Frankfurt/M., Germany
Noel Sharkey, University of Sheffield, Sheffield, UK
Rob Sparrow, Monash University, Melbourne, Australia
Philipp Stroh, Justus-Liebig-Universität Gießen, Germany
Lucy Suchman, Lancaster University, Lancaster, UK
Wendell Wallach, Yale University Interdisciplinary Center for Bioethics, New Haven CT, USA
Steve Wright, Leeds Metropolitan University, Leeds, UK