I argue that there is a gap between so-called “ethical reasoners” and “ethical decision-makers” that cannot be bridged simply by giving an ethical reasoner decision-making abilities. Ethical reasoning qua reasoning is distinguished from other sorts of reasoning mainly by being incredibly difficult, because it involves such thorny problems as analogical reasoning, deciding the applicability of imprecise precepts, and resolving conflicts among them. The ability to make ethical decisions, however, requires knowing what an ethical conflict is, i.e., a clash between self-interest and what ethics prescribes. I construct a fanciful scenario in which a program could find itself in what seems like such a conflict, but argue that in any such situation the program’s “predicament” would not count as a real ethical conflict. Hence, for now it is unclear how even resolving all of the difficult problems surrounding ethical reasoning would yield a theory of “machine ethics.”