Safety Constraints and Ethical Principles in Collective Decision Making Systems

The future will see autonomous machines acting in the same environments as humans, in areas as diverse as driving, assistive technology, and health care. Think of self-driving cars, companion robots, and medical diagnosis support systems. We also believe that humans and machines will often need to work together and agree on common decisions, so hybrid collective decision making systems will be in great demand. In this scenario, both the machines and the collective decision making systems should follow some form of moral values and ethical principles (appropriate to the environment in which they act, but always aligned with human values), as well as safety constraints. Indeed, humans are more likely to accept and trust machines that behave as ethically as other humans in the same environment. Such principles would also make it easier for machines to determine their actions and to explain their behavior in terms humans can understand. Moreover, machines and humans will often need to make decisions together, either by reaching consensus or by finding a compromise, and this is facilitated by shared moral values and ethical principles. The sketch below illustrates one way this interplay can be made concrete.
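To make the intended interplay between shared constraints and collective preferences concrete, here is a minimal sketch, not taken from this paper: candidate actions are first filtered by shared safety/ethical constraints, and the agents' preferences over the remaining options are then aggregated with a simple Borda-style rule. All names and data are illustrative assumptions.

```python
from typing import Callable, Dict, List

def collective_choice(
    actions: List[str],
    constraints: List[Callable[[str], bool]],  # shared safety/ethical tests
    rankings: List[List[str]],                 # each agent's ranked preferences
) -> str:
    # 1. Keep only actions that every shared constraint deems acceptable.
    feasible = [a for a in actions if all(ok(a) for ok in constraints)]
    if not feasible:
        raise ValueError("No action satisfies the shared constraints")

    # 2. Aggregate preferences over the feasible set with a Borda-style score:
    #    an action earns more points the higher each agent ranks it.
    scores: Dict[str, int] = {a: 0 for a in feasible}
    for ranking in rankings:
        ranked_feasible = [a for a in ranking if a in feasible]
        n = len(ranked_feasible)
        for position, action in enumerate(ranked_feasible):
            scores[action] += n - position
    return max(scores, key=scores.get)

# Illustrative example: two humans and one machine choose a route;
# a shared safety rule rules out the restricted route before voting.
actions = ["fast_route", "scenic_route", "restricted_route"]
constraints = [lambda a: a != "restricted_route"]
rankings = [
    ["restricted_route", "fast_route", "scenic_route"],
    ["scenic_route", "fast_route", "restricted_route"],
    ["fast_route", "scenic_route", "restricted_route"],
]
print(collective_choice(actions, constraints, rankings))  # -> "fast_route"
```

In this toy setting the constraints act as a hard filter (no trade-off against preferences), while the voting rule handles consensus or compromise among the remaining options; richer formulations could instead treat ethical requirements as soft constraints or priorities.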
