DoD Directive on Autonomy in Weapon Systems

This is a guest post by Mark Gubrud (@mgubrud).

On Nov. 21, as news of Human Rights Watch’s (HRW) “Losing Humanity” report was spreading, the Department of Defense quietly released Directive 3000.09 “for the development and use of autonomous and semi-autonomous functions in weapon systems”, making the United States the first nation to have an official policy statement on autonomous weapon systems (AWS).

The DoD Directive appears at first glance to be a stony rejection of the call by ICRAC (the International Committee for Robot Arms Control) and HRW for a broad AWS ban, setting a presumption that the US will proceed to develop, deploy and use AWS under certain doctrines and guidelines: “The Commanders of the Combatant Commands shall…Use autonomous and semi-autonomous weapon systems in accordance with this Directive….”

On closer examination, a more complicated picture emerges. The policy defines an AWS as “A weapon system that, once activated, can select and engage targets without further intervention by a human operator.” The ability of the system to select and engage targets autonomously makes it an AWS even if it is human-supervised with a possible human override; thus “human on the loop” is assigned the same status as “human out of the loop,” a conservative (good) policy.

The policy distinguishes AWS from the “semi-autonomous weapon system” (SAWS), which it defines as “A weapon system that, once activated, is intended to only engage individual targets or specific target groups that have been selected by a human operator.” The category includes systems that automatically acquire, track, identify and prioritize potential targets or cue humans to their presence, “provided that human control is retained over the decision to select individual targets and specific target groups for engagement.” It also includes weapons with terminal homing guidance and “fire and forget” weapons where the target has been human-selected.

The policy basically green-lights the development and use of both lethal and nonlethal SAWS for all targets.

Fully autonomous kinetic weapons, however, are only pre-authorized “for local defense” of manned installations and platforms, presumably referring to missile and projectile interception systems. And they have to be human-supervised.

Unsupervised AWS are only authorized for “non-lethal, non-kinetic force, such as some forms of electronic attack, against materiel targets….”

This policy is basically, “Let the machines target other machines; Let men target men.” Since the most compelling arms-race pressures will arise from machine-vs.-machine confrontation, this solution is a thin blanket, but it suggests some level of sensitivity to the issue of robots targeting humans without being able to exercise “human judgment” — a phrase that appears repeatedly in the DoD Directive.

This approach seems calculated to preempt the main thrust of HRW’s report: that robots cannot satisfy the principles of distinction and proportionality required by international humanitarian law (IHL), and that AWS should therefore never be allowed.

The policy directs that AWS and SAWS “shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

Assuming that directive has been followed, responsibility for IHL compliance will fall on those commanders and operators: “Persons who authorize the use of, direct the use of, or operate autonomous and semi-autonomous weapon systems must do so with appropriate care and in accordance with the law of war, applicable treaties, weapon system safety rules, and applicable rules of engagement (ROE).”

The policy thus entrenches AWS policy in a strong defensive position against assault on the basis of IHL and the limits of computation.

However, the development and fielding of fully autonomous lethal weapons that do engage human targets are not ruled out. Such systems must be approved by three Under Secretaries of Defense and the Chairman of the Joint Chiefs of Staff, but a separate set of guidelines is provided for them, suggesting that such approval would not be extraordinary.

For the time being, however, DoD can deny that such programs have been approved and point to this policy to deflect questions about killer robots targeting people, and whether that comports with international law.

UPDATE: The analysis above failed to fully consider the implications of the Directive’s definition of “semi-autonomous weapon systems.” These include “lock-on-after-launch homing munitions,” which are de facto fully autonomous in that, after launch, they identify and engage their targets without further human intervention. The policy does not exclude such weapons from engaging targets such as aircraft, ships, and tanks that may contain human beings, nor does it exclude them from targeting free-standing personnel. A fuller discussion is available at: http://thebulletin.org/us-killer-robot-policy-full-speed-ahead.
