ICRAC Statement to the 2017 UN CCW GGE Meeting
Delivered by Noel Sharkey, Chair, on 15 November 2017
I speak on behalf of the International Committee for Robot Arms Control, a founding member of the Campaign to Stop Killer Robots. We would like to thank Ambassador Gill for his preparation of this important meeting. We also thank all of the States Parties for their lively participation and their interesting points of view.
ICRAC has many concerns about the use and development of autonomous weapons systems, but in this statement we will concentrate on three points that have come up in the discussions here: the dual use of autonomous systems, the current state of autonomous weapons systems development, and the issue of definitions and human control.
First, the question of dual use: Will a prohibition on LAWS inhibit innovation in autonomous systems that serve a practical purpose for good in society? The answer is clearly NO! Remember: we are not calling for a prohibition on the development or use of autonomous robots or autonomous functioning in the military or in the civilian sphere – except in one instance. We wish only to prohibit the development and use of autonomy in the critical functions of target selection and the application of violent force. Let us be totally clear here that restricting these critical functions in weapon systems will have absolutely no impact on civilian or even other military applications.
Second, it became evident in the discussions over the last couple of days that some of the delegates believe that no one is as yet developing autonomous weapons systems, that they are a long way off and that there are not even any working prototypes. The announcements from a number of companies in the hi-tech nations tell a very different story. In recent years we have heard about the development of fully autonomous fighter jets, tanks, submarines, naval ships, border protection systems and swarms of small drones. These have not yet been deployed, but that will not take long. For example, Kalashnikov announced this year that it is developing a “fully automated combat module” based on neural networks that could allow a weapon to “identify targets and make decisions.” We cannot verify the truth of such claims, but it is nonetheless clear that the underlying technology that will enable self-targeting is here and could be deployed soon.
Finally, we at ICRAC are very concerned that we are already beginning to see the emergence of an arms race towards ever increasing levels of autonomy in weapon systems. There are often token efforts to say that there is a human somewhere in the control loop, or on the control loop, exercising some form of human judgement or planning. This human control of weapons systems is the key component of what we should be focussing on in these discussions – not artificial intelligence, not different levels of autonomy for vehicles or semi-autonomous functions. It is sufficient to define autonomous weapons systems in a simple way, such as “weapons systems that, once launched, can select targets and apply violent force without meaningful human control” – or something similar. It would be MOST valuable here to debate what kinds of human control states find acceptable. There are 30 years of research on human supervisory control of computing machinery, and we have never heard it mentioned here. So let us settle on a simple definition of autonomous weapons systems without delay and get down to the really important question of what is an acceptable level of human control. Why are we not discussing this here, to find out what state experts think is acceptable and what is acceptable ethically and under IHL and IHRL?
ICRAC recommends that States Parties schedule at least four weeks of talks in 2018 to discuss the human control of weapons systems and to start a process toward negotiating a legally binding instrument that ensures meaningful human control over weapon systems.