Contemporary technology creates a proliferation of non-human artificial entities such as robots and intelligent information systems. Sometimes they are called 'artificial agents'. But are they agents at all? And if so, should they be considered moral agents and be held morally responsible? They act on us in various ways, and what happens can be, and has to be, discussed in terms of right and wrong, good and bad. But does that make them agents, or moral agents? And who is responsible for the consequences of their actions: the designer, the user, the robot? Standard moral theory has difficulty coping with these questions for several reasons. First, it generally understands agency and responsibility as individual and undistributed; I will not discuss this issue further here. Second, it is tailored to human agency and human responsibility, excluding non-humans. It draws strong distinctions between (humans as) subjects and objects, between humans and animals, between ends (aims, goals) and means (instruments), and sometimes between the moral and the empirical sphere. Moral agency is seen as an exclusive feature of (some) humans. But if non-humans, natural and artificial, have such an influence on the way we lead our lives, it is undesirable and unhelpful to exclude them from moral discourse.

In this paper, I explore how we can include artificial agents in our moral discourse without giving up the 'folk' intuition that humans are somehow special with regard to morality, that there is a special relation between humanity and morality—whatever that means. We give up this view if we lower the threshold for moral agency (which I take Floridi and Sanders to do), or if we call artefacts 'moral' in virtue of what they do (which I take Verbeek to do in his interpretation of Latour and others) or in virtue of the value we ascribe to them (which I take Magnani to do).
I propose an alternative route, which replaces the question of how 'moral' non-human agents really are with the question of the moral significance of appearance. Instead of asking what kind of 'mind' or brain states non-humans must really have to count as moral agents (approach 1), what they really do to us (approach 2), or what value they really have (approach 3), I propose to redirect our attention to the various ways in which non-humans, and in particular robots, appear to us as agents, and to how they influence us in virtue of this appearance. Thus, I leave the question of the moral status of non-humans open and make room for a study of the moral significance of how humans perceive artificial non-humans such as robots, and of how that perception shapes their interaction with these entities and their beliefs about them. In particular, I argue that humans are justified in ascribing virtual moral agency and moral responsibility to those non-humans that appear similar to themselves—and to the extent that they appear so—and in acting according to this belief. Thinking about non-humans implies that we reconsider our views about humans. My project in that domain is to shift at least some of our philosophical attention in moral anthropology from what we really are (as opposed to non-humans) to anthropomorphology: the human form, what we appear to be, and how other beings appear to us given (our projections and recreations of) the human form. I want to make plausible that it is not their intentional state but their performance that counts morally.
[1] Luciano Floridi et al. On the Morality of Artificial Agents. Minds and Machines, 2004.
[2] L. Magnani. Morality in a Technological World by Lorenzo Magnani, 2007.
[3] Ted Honderich et al. Punishment: The Supposed Justifications, 1969.
[4] Mark Coeckelbergh et al. Imagination and Principles, 2007.
[5] Mark L. Johnson. Moral Imagination: Implications of Cognitive Science for Ethics, 1993.
[6] Lorenzo Magnani. Morality in a Technological World: Knowledge as Duty, 2007.
[7] Steven H. Fesmire et al. John Dewey and Moral Imagination, 2003.
[8] H. J. Paton et al. The Moral Law: Kant's Groundwork of the Metaphysic of Morals: A New Translation with Analysis and Notes, 1961.
[9] P. Verbeek. What Things Do: Philosophical Reflections on Technology, Agency, and Design, 2005.
[10] B. Latour. We Have Never Been Modern, 1991.
[11] K. Himma. Artificial Agency, Consciousness, and the Criteria for Moral Agency: What Properties Must an Artificial Agent Have to Be a Moral Agent? Ethics and Information Technology, 2009.
[12] John R. Searle et al. Minds, Brains, and Programs. Behavioral and Brain Sciences, 1980.
[13] A. M. Turing et al. Computing Machinery and Intelligence, 1950. The Philosophy of Artificial Intelligence.