Toward ensuring ethical behavior from autonomous systems: a case-supported principle-based paradigm

Purpose – This paper aims to propose a paradigm of case-supported principle-based behavior (CPB) to help ensure ethical behavior of autonomous machines. The requirements, methods, implementation and evaluation components of the CPB paradigm are detailed.

Design/methodology/approach – The authors argue that ethically significant behavior of autonomous systems can be guided by explicit ethical principles abstracted from a consensus of ethicists. Particular cases of ethical dilemmas where ethicists agree on the ethically relevant features and the right course of action are used to help discover the principles needed for ethical guidance of the behavior of autonomous systems.

Findings – Such a consensus, along with its corresponding principle, is likely to emerge in many areas in which autonomous systems are apt to be deployed and for the actions they are liable to undertake, as we are more likely to agree on how machines ought to treat us than on how human beings ought to treat one another.

Practical implication...
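To make the idea of case-supported, principle-based action selection concrete, the minimal Python sketch below illustrates one simple way a principle abstracted from agreed-upon cases might be represented and applied to choose among candidate actions. The feature names, weights, and data structures are illustrative assumptions for this sketch only, not the representation used in the paper or in GenEth.

# Minimal illustrative sketch (assumptions, not the authors' implementation):
# a principle is modeled as weights over ethically relevant features, with the
# weights imagined to have been abstracted from cases where ethicists agree on
# the right course of action. An action is preferred if its weighted profile
# of feature satisfactions (+) and violations (-) scores higher.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Action:
    name: str
    # Degree to which the action satisfies (+) or violates (-) each feature.
    features: Dict[str, int] = field(default_factory=dict)

@dataclass
class Principle:
    # Hypothetical feature weights standing in for a learned principle.
    weights: Dict[str, float]

    def score(self, action: Action) -> float:
        # Weighted sum of the action's ethically relevant feature values.
        return sum(self.weights.get(f, 0.0) * v for f, v in action.features.items())

    def preferred(self, candidates: List[Action]) -> Action:
        # Select the candidate action the principle ranks highest.
        return max(candidates, key=self.score)

# Usage: a toy eldercare-style dilemma with invented feature names and values.
principle = Principle(weights={"prevent_harm": 2.0, "honor_autonomy": 1.0})
remind = Action("remind the patient again", {"prevent_harm": 1, "honor_autonomy": -1})
notify = Action("notify the overseer", {"prevent_harm": 2, "honor_autonomy": -2})
print(principle.preferred([remind, notify]).name)

In the CPB paradigm proper, the principle would not be hand-weighted as above but would be discovered from, and remain answerable to, the cases on which ethicists agree.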
