A Robust View of Machine Ethics

Should we be thinking of extending the UN Universal Declaration of Human Rights to include future humanoid robots? And should any such list of rights be accompanied by a list of duties incumbent on such robots (including, of course, their duty to respect human rights)? This presents a momentous ethical challenge for the coming era of proliferation of human-like agents. A robust response to such a challenge says that, unless such artificial agents are organisms rather than ‘mere’ machines, and are genuinely sentient (as well as rational), no sense can be made of the idea that they have inherent rights to moral respect from us or that they have inherent moral duties towards us. The further challenge would be to demonstrate that this robust response is wrong, and if so, why. The challenge runs especially deep, as certain plausible views on the basis of sentience, teleology and moral status in biologically based forms of self-organization and autonomy appear to lend support to the robust position.

1. Humanoid Rights: The Challenge

We humans display morally flavoured emotions towards inanimate mechanisms and other entities which play special roles in our lives. Often these emotions are quite intense. Children pet and confide in their toys, treating them by turns as mock-allies or as enemies: many nascent moral attitudes and interactions are rehearsed in such play. We continue similar patterns of fantasy-level moral play in adult life, for example feeling pride at a newly purchased appliance, vehicle, etc. – as if such acquisitions had somehow been directly responsible for their own design and assembly. Conversely, we vent our anger at them when they ‘misbehave’ – often acting out a little melodrama in which the faulty artefact has fiendishly plotted its malfunction expressly to slight or embarrass us. In our more reflective moods we readily affirm that such affective states have no rational validity.

The moral universe that we inhabit in our most reflectively sanitized moments has a very different shape from the moral universe delimited by our fantasy lives. Within the consensual, metropolitan milieu of modern ‘civilized’ society, documents such as the UN Universal Declaration of Human Rights (1948) provide secular frameworks for moral agency and attitudes. Article 2 of that document affirms, for instance, that ‘Everyone is entitled to all the rights and freedoms set forth in this Declaration, without distinction of any kind, such as race, colour, sex, language, religion’ and so on. ‘Everyone’ here is interpreted to mean all human beings, although the philosophical basis for the ‘universality’ of such rights is not made particularly clear within the Declaration.

Many believe that some of the rights enshrined in such charters should, rationally, be extended to at least some species of non-human biological beings, using behavioural, cognitive and physiological similarities between them and us as a justification for such an extension. This issue of animal rights is tangled in controversy. Another kind of possible extension is to intelligent machines, particularly ones which exhibit rich human-like properties. Clearly many of the issues concerning the extension of the moral universe to artificial humanoids or androids will be the same as those concerning the moral status of some of the more advanced animal species, but there are important differences (Calverley 2005b). I wish to focus this discussion specifically on artificial humanoids (taking the term ‘humanoid’ quite widely).
Such a limitation of scope leaves out many kinds of cases in the vicinity of the present discussion, such as non-humanoid robots of various sorts, virtual agents, and various kinds of organism-machine hybrids – e.g. people who have received massive brain implants with no loss of, and possibly highly enhanced, functionality, not to mention supposed future full brain-to-silicon uploads. Also, we will consider here only those humanoid artificially created agents, current and future, that are, because of their physical makeup, clearly agreed to be machines rather than organisms. The case of artificially produced creatures displaying such rich biological properties that they no longer merit being called machines (or merit such a description only in the rather stilted way that we humans do) is a somewhat separate one, and will not be discussed here.

The similarities between humans and even the most advanced kinds of computationally based humanoid robots likely to be practicable for some time are in one way highly remote, just because of the enormous design gaps between electronic technologies and naturally occurring human physiology. In other ways, however, the similarities are, at least potentially, very great. Thanks to mainstream AI techniques as well as more innovative research advances, humanoid robots may soon have a variety of linguistic, cognitive and structural capacities far closer to ours than those possessed by any non-human animals. (There may also be areas of marked underperformance in robots, which stubbornly resist improvement despite massive R&D effort.)

Should we, then, be thinking, as some suggest, of extending the UN rights pretty soon to the varieties of humanoid robots likely to proliferate on the planet – say within the next century or so? And should any such list of rights be accompanied by a list of duties, including, of course, the duty to respect human rights? Further, how do we account for the enormous potential variability of these artificial robotic agents – variations in appearance, behaviour and intelligence – even when keeping our discussion within the boundaries of the broadly humanoid paradigm? Coming to accept that machines may have responsibilities and rights – not to mention possible kinds of legal and economic status – is a rapidly approaching ethical challenge, one likely to mark in coming decades as great a socio-technological watershed as the arrival of the age of information and communications technologies within the last half-century.

2. The Robust Response

Consider a certain response to that challenge – the Robust Response, as we might call it. The Robust Response proposes (i) that there is a crucial dichotomy between beings that possess organic and physiological characteristics, on the one hand, and ‘mere’ machines on the other; and, further, (ii) that it is appropriate to consider only a genuine organism (whether human or animal; whether naturally occurring or artificially synthesized) as a candidate for intrinsic moral status – so that nothing clearly on the machine side of the machine-organism divide can coherently be considered to have any intrinsic moral status.
The Robust Response may come in many forms, but a central variant revolves around the notion of sentience. This version of the view holds, additionally, (iii) that only beings which are capable of feeling or phenomenal awareness could be genuine subjects of either moral concern or moral appraisal, and further (iv) that only biological organisms (whether naturally occurring or artificially produced) have the capability to be genuinely sentient or conscious. The Robust attitude towards robots – be they ever so human-like in outward form and performance – will thus be that only beings whose inner constitution can clearly be seen to support genuine sentience or feeling deserve to be considered moral subjects, in either the sense of targets of moral concern or that of sources of moral expectation. Unless and until the technology of creating artificial biological organisms progresses to a stage where genuine sentience can be physiologically supported, no ‘mere’ machine, however human-like, intelligent and behaviourally rich its functionality allows it to be, can seriously be taken as having genuine moral status – either as a giver or a receiver of moral action. (Note that holding such a rigid machine/organism demarcation is compatible with adopting as liberal or as restrictive an attitude as one pleases towards accepting different kinds of non-human animal species into ‘our’ moral universe.)

Supporters of the Robust view are likely to see those who dissent from it as taking over-seriously the sentimentalist, fantasist proclivities that we all have – our tendencies, that is, towards child-like over-indulgence in affective responses to objects which do not objectively merit such responses. Such responses to machines may, the Robust view accepts, be all too natural, and no doubt they will need to be taken seriously in robot design and in planning the practicalities of human-robot interaction. Perhaps ‘quasi-moral relationships’ may need to be defined between computer-run robots and their human users, to make it easier for us, the human controllers, to modulate our relations with them. But on the Robust view this could be for pragmatic reasons only: it would have no rational basis in the objective moral status of such robots, which would remain simply implements, of merely instrumental value, having at root only functional, rather than personal, status.

Some time ago Peter Strawson introduced a distinction between two different kinds of attitudes that people may display towards other human or non-human agents (Strawson 1974). On the one hand there are reactive attitudes, typified by emotions such as resentment, gratitude, censure, admiration, and other affective responses implying an attribution of responsibility to the agent towards whom the attitude is held. On the other hand there are objective attitudes, displayed by us towards small children, animals, and humans who suffer from various kinds of mental deficit – in such cases we withhold attributions of responsibility and hence praise and blame. (To adopt an objective attitude towards an individual, when based on the attribution of diminished responsibility, in no way implies a diminution