Toward Non-Intuition-Based Machine and Artificial Intelligence Ethics: A Deontological Approach Based on Modal Logic

We propose a deontological approach to machine (or AI) ethics that avoids some weaknesses of an intuition-based system, such as that of Anderson and Anderson. In particular, it has no need to deal with conflicting intuitions, and it yields a more satisfactory account of when autonomy should be respected. We begin with a "dual standpoint" theory of action that regards actions as grounded in reasons and therefore as having a conditional form suited to machine instructions. We then derive ethical principles from formal properties that the reasons must exhibit to be coherent, and formulate the principles in quantified modal logic. We conclude that deontology not only provides a more satisfactory basis for machine ethics but also endows the machine with the ability to explain its actions, thus contributing to transparency in AI.
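
The abstract does not spell out the formalism, but a rough sketch may help fix ideas. With C(x) standing for an agent's reasons (conditions), A(x) for performing the action, G(x) for the action achieving its purpose, and \Diamond for the modal possibility operator (all illustrative symbols, not the paper's own notation), a conditional action plan and a generalization-style coherence test could be written in quantified modal logic as:

    \forall x\,\bigl(C(x) \rightarrow A(x)\bigr)
    \Diamond\Bigl(\forall x\,\bigl(C(x) \rightarrow A(x)\bigr) \;\wedge\; \exists x\,\bigl(C(x) \wedge G(x)\bigr)\Bigr)

Read informally, the first formula is the plan "any agent whose reasons satisfy C performs A"; the second demands that it be possible, i.e. jointly consistent, for that plan to be adopted by every such agent while the action still achieves its purpose for an agent acting on those reasons. A plan failing such a test is incoherent. The conditional form also maps naturally onto condition-action rules in a machine, and the condition C(x) doubles as the explanation the machine can offer for its action.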

[1] L. Forrow, et al. Deciding for others: The ethics of surrogate decision-making, 1990.

[2] Joshua Rust, et al. The Behavior of Ethicists, 2016.

[3] Walter Sinnott-Armstrong, et al. Moral Intuitionism Meets Empirical Psychology, 2005.

[4] T. Nagel. The View from Nowhere, 1987.

[5] Susan Leigh Anderson, et al. A Prima Facie Duty Approach to Machine Ethics: Machine Learning of Features of Ethical Dilemmas, Prima Facie Duties, and Decision Principles through a Dialogue with Ethicists, 2011.

[6] Dan W. Brock, et al. The Theory of Justice, 2017.

[7] Wendell Wallach, et al. Machine morality: bottom-up and top-down approaches for modelling human moral faculties, 2008, AI & Society.

[8] Nancy S. Jecker, et al. The Sources of Normativity, 2001.

[9] Philip Swenson, et al. Reasons-responsiveness and degrees of responsibility, 2012, Philosophical Studies.

[10] D. Davidson. Actions, Reasons, and Causes, 1980.

[11] Michael Anderson, et al. Towards a Principle-Based Healthcare Agent, 2015.

[12] Michael Anderson, et al. GenEth: a general ethical dilemma analyzer, 2014, AAAI.

[13] Andreas Theodorou, et al. Robot transparency, trust and utility, 2016, Connection Science.

[14] L. Burton. Intention, 2011.

[15] I. Kant, et al. Foundations of the Metaphysics of Morals, 2020, Kant and the Spirit of Critique.

[16] O. O'Neill. Acting on Principle: An Essay on Kantian Ethics, 2013.

[17] D. Justin, et al. Reasons-responsiveness and degrees of responsibility, 2013.

[18] Michael Anderson, et al. Toward ensuring ethical behavior from autonomous systems: a case-supported principle-based paradigm, 2015, Industrial Robot.

[19] Akeel Bilgrami, et al. Self-Knowledge and Resentment, 2006.

[20] Vincent Berenz, et al. A Value Driven Agent: Instantiation of a Case-Supported Principle-Based Behavior Paradigm, 2017, AAAI Workshops.

[21] Michael Anderson, et al. An Approach to Computing Ethics, 2006, IEEE Intelligent Systems.

[22] Andreas Theodorou, et al. What Does the Robot Think? Transparency as a Fundamental Design Requirement for Intelligent Systems, 2016, IJCAI 2016.

[23] Michael Anderson, et al. Machine Ethics: Creating an Ethical Intelligent Agent, 2007, AI Magazine.

[24] Stephen D. Schwarz. The Right and the Good, 1992.

[25] Jonathan J. Sanford. Experiments in Ethics, 2010.

[26] Davide Castelvecchi, et al. Can we open the black box of AI?, 2016, Nature.

[27] Susan Leigh Anderson, et al. Robot be good, 2010, Scientific American.

[28] Joshua Knobe, et al. Experimental philosophy, 2012, Annual Review of Psychology.

[29] S. Athar. Principles of Biomedical Ethics, 2011, The Journal of IMA.