Death by algorithm is the ultimate indignity, says two-star general

Former Major General Robert H. Latiff (with Patrick J. McCloskey) has stood up to be counted against the coming autonomous lethal robots.

Latiff and McCloskey point out the military benefits of autonomous machines, and then comes the but…

The problem is that robotic weapons eventually will make kill decisions on the battlefield with no more than a veneer of human control. Full lethal autonomy is no mere next step in military strategy: It will be the crossing of a moral Rubicon. Ceding godlike powers to robots reduces human beings to things with no more intrinsic value than any object.

When robots rule warfare, utterly without empathy or compassion, humans retain less intrinsic worth than a toaster—which at least can be used for spare parts. In civilized societies, even our enemies possess inherent worth and are considered persons, a recognition that forms the basis of the Geneva Conventions and rules of military engagement.

Lethal autonomy also has grave implications for democratic society. The rule of law and human rights depend on an institutional and cultural cherishing of every individual regardless of utilitarian benefit. The 20th century became a graveyard for nihilistic ideologies that treated citizens as human fuel and fodder.

They speak very frankly about the recent U.S. Department of Defense directive:

The kill decision is still subject to many layers of human command, and the U.S. Defense Department recently issued a directive stating that emerging autonomous weapons “shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

Yet this seems more like wishful thinking than realistic doctrine. Military budget cuts are making robotic autonomy almost fiscally inevitable.

There is a solid criticism of the view held by Michael N. Schmitt (chairman of the International Law Department at the U.S. Naval War College) that war machines can protect civilians and property as well as humans can. “This assurance aside, it is far from clear whether robots can be programmed to distinguish between large children and small adults, and in general between combatants and civilians, especially in urban conflicts. Surely death by algorithm is the ultimate indignity.”

The conclusion of the article is strongly in line with the Campaign to Stop Killer Robots:

Time is running out for military decision makers, politicians and the public to set parameters for research and deployment that could form the basis for national policy and international treaties. The alternative is to blindly accept as inevitable whatever technology offers. Let’s not be robotic in our acquiescence.

Read the full article – With Drone Warfare, America Approaches the Robot-Rubicon
