ICRAC statement on legal issues to the 2014 UN CCW Expert Meeting

On May 15, ICRAC’s Heather Roff delivered the following statement on legal issues to the informal “Meeting of Experts”, gathered to discuss questions related to “lethal autonomous weapons systems” from May 13 to May 16 at the United Nations in Geneva, Switzerland.

Legal statement by the International Committee for Robot Arms Control

Convention on Conventional Weapons Meeting of Experts on lethal autonomous weapons systems
United Nations Geneva
15 May 2014

Thank you Madam Chairperson,

I am making this statement on behalf of ICRAC. We have three issues to raise about the law and autonomous weapons:

FIRST, although the experts who spoke yesterday all said that IHL could govern autonomy, many legal experts have raised concerns, including about compliance with IHL and about accountability. Such concerns have been voiced at many international meetings and consultations on this issue. We hope states will take these concerns into account in future discussions.

SECOND, I would like to challenge the justification offered for the fielding, use, or production of lethal autonomous weapons, particularly the reliance on an argument about removing “bad emotions”: the claim that a lethal autonomous weapons system will be technologically superior and inherently “rational”. If we rely on this antecedent justification, and the machine upholds the static definition of international law presented, that is, if the machine adheres to IHL and makes a “reasonable judgment in the circumstances”, then any killing it commits is deemed either reasonable or an “accident”, and so falls outside the purview of moral judgment and criminal liability.

Furthermore, the suggestion that such reasonable machines will not rape, or that rape cannot be carried out by a robot, offered as an additional justification for their creation, is politically myopic and insensitive. We should remember that rape in war is not only the act of individuals; it has often been an instrument of state policy.

THIRD, following on from some of France’s questions on the issue of command responsibility, I would like to hear the panel’s view on how current IHL might hold a commander accountable for actions that are beyond her control. In particular:

Under IHL, a commander is criminally liable for the acts of subordinates if:
1) (s)he is in a superior-subordinate relationship with the direct actors, defined as effective control and measured by the ability to prevent or punish the subordinate;
2) the commander has, or should have, knowledge of crimes committed by subordinates; and
3) the commander fails to prevent the crimes (or to punish them if (s)he learns of them after the fact).
Note: this creates liability for the acts of subordinates; it does NOT connect a commander to a weapon directly. Moreover, given the learning architecture of such systems, effective control is IMPOSSIBLE, and this account raises the question of what it would even mean to punish a subordinate “weapons system”.

For we must recall that the behavior of autonomous systems is, by definition, stochastic or probabilistic in nature and thus unpredictable; the criterion of effective control therefore cannot be met.

Thank you.
