Can Robots be Moral?

Recent philosophical discussion concerning robots has been largely preoccupied with questions such as "can robots think, know, feel, or learn?"; "can they be conscious, teleological, and self-adaptive?"; "can robots be in principle psychologically and intellectually isomorphic to men?" Considerably less attention has been paid meanwhile to the question whether robots can be moral. Since the latter problem seems to me rather intimately connected with the ones extensively discussed, I would like to raise it here in an attempt to carry the discussion to its logical conclusion. The thesis of this paper is that if there are no magic descriptive terms (intelligence, consciousness, purposiveness, etc.) predicable exclusively of men but not of robots, then there are no such moral terms either. If men and machines coexist in a natural continuum in which there are no gaps, quantum jumps, or insurmountable barriers preventing the assimilation of the one to the other, then they also coexist in a moral continuum in which only relative but never absolute distinctions can be made between human and machine morality.

I will argue this thesis by raising the question whether robots can be moral in two stages: (1) Can robots act morally? (2) Can we, without absurdity, treat robots as moral agents? The answer to these questions will be given, not in terms of a new "robot morality," but in terms of a few traditional ethical theories. To make these questions at least initially plausible, our robots will have to be imagined to be much more sophisticated than any single machine already existing. At the same time, for all their complexity, they are not to have any capabilities other than the ones computer scientists and cyberneticists like Turing, Wiener, Ashby, Arbib, Pask, and Uttley, for example, have argued to be, if not already