As delivered by Prof. Peter Asaro
ICRAC has been pleased to hear states shift their focus away from definitions of the technologies of autonomous weapons systems and towards discussing restrictions on their use and how they should be controlled. Of course, by definition, if states wanted genuine meaningful human control of weapons systems, they would not be using autonomous weapons systems. And (as an aside) we should not forget the scientifically recognized limitations of the technology or the foreseeable threats to global security such weapons pose.
We are also pleased with the statements and working papers beginning to examine the requirements for human control and planning in military systems. While this can be multifaceted, we must not let the complexity of military planning throw a smoke screen over the core issues of the meaningful human assessment of all targets, their legitimacy and the proportionate use of force.
We are glad that we see the beginnings of a more nuanced approach to the control of weapons systems that cannot be captured by gross terms such as in-the-loop, on-the-loop, the broader loop, human oversight, and appropriate levels of human judgement. However, these terms continue to insinuate themselves into military, political and defence contractors' narratives outside of the CCW. We welcome the suggestion of the iPRAW report to distinguish control-by-design and control-in-use—acknowledging that ultimate responsibility for the use of force lies in the specific context of its use.
As a scientific and scholarly group, our focus is on how we can make control effective and ensure that operators, commanders and planners are making clear judgements about the validity of every attack at the time of that attack.
To do this we need to move away from blanket terms and examine in detail how humans interact with automated machinery. As we have pointed out before, there have been more than 30 years of scientific research on human supervisory control of machinery and more than 100 years of research on the psychology of human reasoning. Ignoring the science for the sake of expediency could lead us down a path to a humanitarian disaster.
The scientific approach is not mutually exclusive with an examination of the military control of weapons and the many lessons to be learned from current methods. Indeed, we applaud the UK's paper on human control in 2018 and that of the Netherlands and others this year. We may not agree with all of the detail, but it is what we have urged all of the high contracting parties to bring to the table.
This combination of work can help us to design human-machine interfaces that allow weapons to be controlled in a manner that is fully compliant with international law and the principle of humanity.
First, there should be a focus on what the human operator MUST do in the targeting cycle. This is control-in-use, which is governed by targeting rules under International Humanitarian Law and International Human Rights Law, which were well articulated by the ICRC in their statement this morning. Further, international law rules that apply after the use of weapons – such as those that relate to human responsibility – must be satisfied.
Second, the design of weapon systems must render them INCAPABLE of operating without meaningful human control. This is control-by-design, which is governed by international weapons law. Under international weapons law, if a weapon system is, by its design, incapable of being sufficiently controlled to comply with the law, then such a weapon should be prohibited.
We need further discussion of the details of human-machine interfaces, the distribution of responsibility in the targeting cycle, and how their design can ensure IHL and IHRL compliance. Such details need not be the substance of a treaty, and we must resist being caught up in the weeds of process. We support Germany's goal of finding a shared understanding of the principles of human control that apply to all weapons systems now and in the future, regardless of context, planning or process. This is no different from the normal processes that operate in science. One of the goals of science is to reduce the complexity of the world to simple theories or principles that capture all of the experimental data. In other words, we create abstractions of the details that are firmly coupled with and informed by the details. As Einstein once said, explanations should be as simple as possible but no simpler. "Human in the loop" and its variants fall under the too-simple category. Detailed accounts of every weapon type and how it is controlled in every context are far too complex.
Let me give you an example of an abstraction with three conditions that could make a good starting point for discussions on the control of weapons systems. I have said this before but clearly there is no prohibition on repeating yourself in this room.
- a human commander (or operator) will have full contextual and situational awareness of the target area for each and every attack, and be able to perceive and react to any change or unanticipated situations that may have arisen since planning the attack;
- there will be active cognitive participation in every attack, with sufficient time for deliberation on the nature of any target, its significance in terms of the necessity and appropriateness of attack, and the likely incidental and possible accidental effects of the attack; and
- there will be a means for the rapid suspension or aborting of every attack.
These are general principles that could provide a starting point for discussion by states in the context of negotiating a legally binding treaty that clearly articulates the legal obligations of human control.