Algorithmic Moral Control of War Robots: Philosophical Questions

In a series of publications, Ronald Arkin and his team2 have proposed the concept of an ‘ethical governor’, which is supposed to effectively control and enforce the ethical use of lethal force by robots on the battlefield. The idea of an ethical governor, and more generally the concept of algorithmic (ie symbolic computational) control of robot morality, has had a great influence on both the engineering and the public discourse on robot ethics, and is often cited in general-interest publications to justify the use of war robots and to counter critical questions about the moral issues raised by robot deployment on the battlefield. Science on msnbc.com reports: ‘Robot warriors will get a guide to ethics’,3 also echoed on the influential Communications of the ACM news site.4 Discovery News claims: ‘Robots warrior ethical guide in the works’,5 while Cnet.com’s military tech section writes: ‘Killer robots can be taught ethics’.6 Headlines like these suggest that efficient (and sufficient) ethical control of war robots is nothing more than a technical matter, which, furthermore, has already been addressed successfully (‘can be taught’).
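
To make the object of these philosophical questions concrete, the following minimal sketch illustrates the general idea behind such an ‘ethical governor’: a veto mechanism that checks a proposed lethal action against symbolically encoded constraints (discrimination, proportionality, rules of engagement) and suppresses the action if any constraint fails. This is an illustrative toy under assumed simplifications, not Arkin’s actual architecture; all field names, constraints and thresholds here are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ProposedAction:
    """A lethal action proposed by the targeting system (hypothetical fields)."""
    target_is_combatant: bool      # discrimination: is the target a legitimate military target?
    expected_civilian_harm: float  # proportionality: estimated collateral harm (arbitrary units)
    military_advantage: float      # proportionality: estimated military value of the strike
    weapon_authorised: bool        # is the selected weapon permitted under the rules of engagement?

# A 'constraint' is simply a predicate that must hold for the action to be permissible.
Constraint = Callable[[ProposedAction], bool]

CONSTRAINTS: List[Constraint] = [
    lambda a: a.target_is_combatant,                              # discrimination principle
    lambda a: a.weapon_authorised,                                # rules-of-engagement check
    lambda a: a.expected_civilian_harm <= a.military_advantage,   # crude proportionality test
]

def ethical_governor(action: ProposedAction) -> bool:
    """Permit the action only if every encoded constraint is satisfied;
    otherwise the governor vetoes the lethal behaviour."""
    return all(constraint(action) for constraint in CONSTRAINTS)

if __name__ == "__main__":
    strike = ProposedAction(target_is_combatant=True,
                            expected_civilian_harm=0.8,
                            military_advantage=0.3,
                            weapon_authorised=True)
    print("Engagement permitted:", ethical_governor(strike))  # False: fails the proportionality test
```

Even this toy version makes visible what headlines of the kind quoted above gloss over: before any such check can run, principles like discrimination and proportionality must first be reduced to machine-checkable predicates and numeric estimates, and it is precisely this reduction that raises the philosophical questions pursued below.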