On 13 February 2010, Allvoices published a piece that discusses why military robotics is on the rise, quoting ICRAC’s Noel Sharkey and also referring to other ICRAC members and their critical stance:
There are several reasons why the Department of Defense is shifting its focus away from conventional weapons and towards highly advanced robotic systems. For starters, as UK Professor Noel Sharkey notes, robots are cheap to manufacture, require fewer staff and, according to the Navy, perform better in complex missions. Furthermore, beyond not having to pay human soldiers an annual salary, robots save the government a tremendous amount of money. The funds from the budget allocated to recruiting, training, housing, and feeding soldiers, as well as to health-care and death benefits, can in turn be invested elsewhere, such as in robotic innovation. A parallel argument is that robots help remove ‘the fog of war’ and, as Max Frisch puts it, ‘organize the universe so that man doesn’t have to experience it’; ‘it’, in this context, means warfare.
The article also touches upon the notion of equipping unmanned military systems with an ethical governor:
One scientist working under contract from the US Army, Professor Ronald Arkin of Georgia Tech, is currently tackling this challenge by designing a software program called the ‘Robot Governor’. In his book Governing Lethal Behavior in Autonomous Robots, he suggests that his ‘Robot Governor’
‘…can be designed without an instinct for self-preservation and, as a result, no tendency to lash out in fear… [They can be built] without anger or recklessness… [They can be made] to be invulnerable to the psychological problem of “scenario fulfillment”, which causes human beings to absorb new information more easily if it agrees with their pre-existing ideas.’
Notwithstanding Professor Arkin’s recognized expertise, not all scientists involved in robotics research approve of autonomous systems. Professor Sharkey, for example, believes that the ‘Robot Governor’ is a ‘good idea in principle. Unfortunately, it’s doomed to failure…’
The founders of the International Committee for Robot Arms Control (ICRAC), physicist Jürgen Altmann of Dortmund University of Technology, Germany; Robert Sparrow of the Centre for Human Bioethics, Monash University, near Melbourne, Australia; and philosopher Peter Asaro of Rutgers University in New Brunswick, New Jersey, fear that the next generation of robots will be mistakenly trusted with life-or-death decisions. Selmer Bringsjord, another critic, is concerned that ‘If we give robots the power to do nasty things, we have to use logic to teach them not to do unethical things.’ ICRAC, with the support of others like Bringsjord and Sharkey, is seeking an international treaty to limit the use of such systems.
The full text of the article can be found here.