Non-monotonic Resolution of Conflicts for Ethical Reasoning

This chapter attempts to specify some of the requirements of ethical robotic systems. It begins with a short story by John McCarthy, “The Robot and the Baby,” which shows how difficult it is for a rational robot to be ethical. It then characterizes the types of “ethical robots” to which this approach is relevant and the nature of the ethical questions at stake. The second section distinguishes the different aspects of ethical systems and focuses on ethical reasoning: it first shows that ethical reasoning is essentially non-monotonic, and then that it must take into account the known consequences of actions, at least if consequentialist ethics is to be modeled. The last two sections present two possible implementations of ethical reasoners, one based on ASP (answer set programming) and the other on the BDI (belief, desire, intention) framework for programming agents.
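To make the notion of non-monotonic conflict resolution concrete, the following is a minimal Python sketch of prioritized ethical rules in the spirit of prioritized defaults. The rule names, priorities, and the scenario (a general duty not to lie, overridden when a life is at stake) are illustrative assumptions, not the chapter's own formalization.

```python
# Sketch of non-monotonic conflict resolution between ethical rules.
# The rules, priorities, and scenario are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Rule:
    name: str
    priority: int      # higher number = higher priority
    conclusion: str    # action the rule prescribes, e.g. "tell_truth"


def resolve(rules, applicable):
    """Return the conclusion of the highest-priority applicable rule.

    Adding information (making another rule applicable) can retract a
    conclusion that held before -- the hallmark of non-monotonicity.
    """
    active = [r for r in rules if r.name in applicable]
    if not active:
        return None
    return max(active, key=lambda r: r.priority).conclusion


rules = [
    Rule("never_lie", 1, "tell_truth"),   # general duty
    Rule("protect_life", 2, "lie"),       # exception that overrides it
]

# With only the general duty applicable, the agent tells the truth:
print(resolve(rules, {"never_lie"}))                  # tell_truth
# Learning that a life is at stake defeats that conclusion:
print(resolve(rules, {"never_lie", "protect_life"}))  # lie
```

In an ASP encoding this same pattern would typically be expressed with default negation (a rule fires unless a higher-priority exception is derivable), which is what makes answer set programming a natural target for such reasoners.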
