The HERA approach to morally competent robots

To address the need for autonomous moral decision-making, we introduce a software library for modeling hybrid ethical reasoning agents (HERA for short). The goal of the HERA project is to provide theoretically well-founded and practically usable logic-based machine ethics tools for implementation in robots. The novelty is that HERA implements multiple ethical principles, such as utilitarianism, the principle of double effect, and a Pareto-inspired principle. These principles can be used to automatically assess moral situations represented in a format we call causal agency models. We discuss how to model moral situations using our approach and how it copes with uncertainty about moral values. Finally, we briefly outline the architecture of our robot IMMANUEL, which implements HERA and can explain its ethical decisions to humans.
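
To make the idea of checking a causal agency model against an ethical principle concrete, here is a minimal sketch in Python. It deliberately does not reproduce HERA's actual API: the names (Action, CausalAgencyModel, utilitarian_permissible) and the flat consequence-to-utility representation are simplifying assumptions, and the utilitarian check shown is just one of the principles the library supports.

```python
# Hypothetical sketch: a stripped-down causal agency model and a
# utilitarian permissibility check. Not the HERA API.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Action:
    """An available action together with the consequences it causes."""
    name: str
    consequences: List[str]


@dataclass
class CausalAgencyModel:
    """A moral situation: the actions open to the agent and the moral
    utility attached to each possible consequence."""
    actions: List[Action]
    utilities: Dict[str, int]

    def utility(self, action: Action) -> int:
        """Sum the utilities of everything the action causes."""
        return sum(self.utilities[c] for c in action.consequences)


def utilitarian_permissible(model: CausalAgencyModel, action: Action) -> bool:
    """Act utilitarianism: an action is permissible iff no alternative
    in the model yields strictly higher overall utility."""
    best = max(model.utility(a) for a in model.actions)
    return model.utility(action) >= best


# A trolley-style dilemma: diverting saves five but kills one.
model = CausalAgencyModel(
    actions=[
        Action("pull_lever", ["one_person_dies", "five_persons_saved"]),
        Action("refrain", ["five_persons_die"]),
    ],
    utilities={
        "one_person_dies": -1,
        "five_persons_saved": +5,
        "five_persons_die": -5,
    },
)

for a in model.actions:
    print(a.name, "permissible:", utilitarian_permissible(model, a))
# -> pull_lever permissible: True
#    refrain permissible: False
```

Note that a principle like double effect cannot be evaluated on utilities alone: it must additionally distinguish, within the causal structure, harms that are intended as means from harms that are merely foreseen side effects. This is why the richer causal agency format matters beyond the flat representation sketched here.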
