On May 15, ICRAC’s David Akerson delivered the following statement, ICRAC’s second statement on legal issues, to the informal “Meeting of Experts”, gathered to discuss questions related to “lethal autonomous weapons systems” from May 13 to May 16 at the United Nations in Geneva, Switzerland.
Legal statement by the International Committee for Robot Arms Control
Convention on Conventional Weapons Meeting of Experts on lethal autonomous weapons systems
United Nations Geneva
15 May 2014
Thank you, Madam Chairperson.
ICRAC has three brief comments:
FIRST, in his presentation Mr Melzer argued that the introduction of autonomous weapons would not change the existing rules governing the conduct of hostilities or the resort to force. We agree with this, and consider it incontestable; to our knowledge, no one has ever argued otherwise. No critic of autonomous weapons has claimed that the weapons would change the law, the jus in bello or the jus ad bellum, which are fundamental and long-standing. Rather, critics have argued that autonomous weapons could have the operational effect of lowering the barriers to the use of force: because of the potential ease of their use, because they remove risk to soldiers and thereby reduce discontent among the home population, and because of the ease of covert use. We would like to hear the speakers' views on that concern.
SECOND, Professor Marauhn urges us to focus on the release of a weapon as the moment of decision for which legal accountability could be established. While we agree with this point, we are here to discuss autonomous weapon systems, which implies that the final decision by a human is not in fact the final decision that will be made: decision-making authority is being delegated to a machine. It is argued that the machine operates under the control of a program written by humans; but since it is an autonomous system, and the program is part of that system, the system is in fact operating under its own control. The complexity of autonomous systems means that, as a practical matter, humans can neither anticipate the conditions the system will encounter nor predict its behavior, even assuming that it functions according to its program and does not malfunction. Nor is it clear that machines will be programmed only by human beings, since advances in software engineering increasingly automate the process of coding. Does this not imply that the use of autonomous weapon systems entails a progressive transfer of control and responsibility from humans to machines?
THIRD, Professor Heyns raised the issue of the lack of transparency around weapons development and use, and encouraged states to be more open about these issues. We agree with this. We would appreciate it if speakers on the panel could address the adequacy of existing transparency regimes, and offer any suggestions for states going forward.