On Nov. 21, 2012, as news of Human Rights Watch’s (HRW) “Losing Humanity” report was spreading, the Department of Defense quietly released Directive 3000.09 “for the development and use of autonomous and semi-autonomous functions in weapon systems,” making the United States the first nation to have an official policy statement on autonomous weapon systems (AWS).
The DoD Directive appears at first glance to be a stony rejection of the call by the International Committee for Robot Arms Control (ICRAC) and HRW for a broad AWS ban, setting a presumption that the US will proceed to develop, deploy and use AWS under certain doctrines and guidelines: “The Commanders of the Combatant Commands shall…Use autonomous and semi-autonomous weapon systems in accordance with this Directive….”
On closer examination, a more complicated picture emerges. The policy defines an AWS as “A weapon system that, once activated, can select and engage targets without further intervention by a human operator.” The ability of the system to select and engage targets autonomously makes it an AWS even if it is human-supervised with a possible human override; thus “human on the loop” is assigned the same status as “human out of the loop,” a conservative (good) policy.
The policy distinguishes AWS from “semi-autonomous weapon system” (SAWS) which it defines as “A weapon system that, once activated, is intended to only engage individual targets or specific target groups that have been selected by a human operator.” The category includes systems that automatically acquire, track, identify and prioritize potential targets or cue humans to their presence, “provided that human control is retained over the decision to select individual targets and specific target groups for engagement.” It also includes weapons with terminal homing guidance, and “fire and forget” weapons where the target has been human-selected.
The policy basically green-lights the development and use of both lethal and nonlethal SAWS for all targets.
Fully autonomous kinetic weapons, however, are only pre-authorized “for local defense” of manned installations and platforms, presumably referring to missile and projectile interception systems. And they have to be human-supervised.
Unsupervised AWS are only authorized for “non-lethal, non-kinetic force, such as some forms of electronic attack, against materiel targets….”
This policy is basically, “Let the machines target other machines; Let men target men.” Since the most compelling arms-race pressures will arise from machine-vs.-machine confrontation, this solution is a thin blanket, but it suggests some level of sensitivity to the issue of robots targeting humans without being able to exercise “human judgment” — a phrase that appears repeatedly in the DoD Directive.
This approach seems calculated to preempt the main thrust of HRW’s report: that robots cannot satisfy the principles of distinction and proportionality required by international humanitarian law (IHL), and that AWS should therefore never be allowed.
The policy directs that AWS and SAWS “shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”
Assuming that directive has been followed, responsibility for IHL compliance will fall on those commanders and operators: “Persons who authorize the use of, direct the use of, or operate autonomous and semi-autonomous weapon systems must do so with appropriate care and in accordance with the law of war, applicable treaties, weapon system safety rules, and applicable rules of engagement (ROE).”
The policy thus entrenches AWS policy in a strong defensive position against assault on the basis of IHL and the limits of computation.
However, the development and fielding of fully autonomous lethal weapons that do engage human targets is not ruled out. Such systems must be approved by three Undersecretaries of Defense and the Chairman of the Joint Chiefs of Staff, but a separate set of guidelines is provided for them, suggesting that such approval would not be extraordinary.
For the time being, however, DoD can deny that such programs have been approved and point to this policy to deflect questions about killer robots targeting people, and whether that comports with international law.
UPDATE: The analysis above failed to fully consider the implications of the Directive’s definition of “semi-autonomous weapon systems.” These include “lock-on-after-launch homing munitions,” which are de facto fully autonomous in that, after launch, they identify and engage their targets without further human intervention. The policy does not exclude such weapons from engaging targets such as aircraft, ships, tanks, etc. that may contain human beings, nor does it exclude them from targeting free-standing personnel. A fuller discussion is available at: http://thebulletin.org/us-killer-robot-policy-full-speed-ahead.