Responsible Agency Through Answerability: Cultivating the Moral Ecology of Trustworthy Autonomous Systems

The decades-old debate over so-called ‘responsibility gaps’ in intelligent systems has recently been reinvigorated by rapid advances in machine learning that are delivering many of the capabilities of machine autonomy that Matthias [1] originally anticipated. The emerging capabilities of intelligent learning systems highlight and exacerbate existing challenges to meaningful human control of, and accountability for, the actions and effects of such systems. The related challenge of human ‘answerability’ for system actions and harms has come into focus in the recent literature on responsibility gaps [2, 3]. We describe a proposed interdisciplinary approach to designing for answerability in autonomous systems, grounded in an instrumentalist framework of ‘responsible agency cultivation’ drawn from moral philosophy and the cognitive sciences, as well as in empirical results from structured interviews and focus groups in the application domains of health, finance, and government. We outline a prototype dialogue agent, informed by these emerging results, designed to help bridge the structural gaps in organisations that typically prevent the human agents responsible for an autonomous sociotechnical system from answering to its vulnerable patients of responsibility.

[1] Peter Königs, Artificial intelligence and responsibility gaps: what is the problem?, Ethics and Information Technology, 2022.

[2] Maximilian Kiener, Can We Bridge AI’s Responsibility Gap at Will?, Ethical Theory and Moral Practice, 2022.

[3] Barbara A. Barry et al., Patient apprehensions about the use of artificial intelligence in healthcare, npj Digital Medicine, 2021.

[4] Daniel W. Tigard, Technological Answerability and the Severance Problem: Staying Connected by Demanding Answers, Science and Engineering Ethics, 2021.

[5] M. Sand et al., Responsibility beyond design: Physicians’ requirements for ethical medical AI, Bioethics, 2021.

[6] Rubén Mancha et al., From Automation to Autonomy: Legal and Ethical Responsibility Gaps in Artificial Intelligence Innovation, Michigan Technology Law Review, 2020.

[7] Pouyan Esmaeilzadeh et al., Use of AI-based tools for healthcare purposes: a survey study from consumers’ perspectives, BMC Medical Informatics and Decision Making, 2020.

[8] Alice Xiang et al., Machine Learning Explainability for External Stakeholders, arXiv, 2020.

[9] Daniel W. Tigard, There Is No Techno-Responsibility Gap, Philosophy & Technology, 2020.

[10] Mark Coeckelbergh, Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability, Science and Engineering Ethics, 2019.

[11] Johannes Himmelreich, Responsibility for Killer Robots, Ethical Theory and Moral Practice, 2019.

[12] Antony Duff, Legal and Moral Responsibility, 2009.

[13] Lyria Bennett Moses, Recurring Dilemmas: Law's Race to Keep Up with Technological Change, 2006.

[14] Andreas Matthias, The responsibility gap: Ascribing responsibility for the actions of learning automata, Ethics and Information Technology, 2004.

[15] Deborah G. Johnson, Software Agents, Anticipatory Ethics, and Accountability, 2011.

[16] V. Braun and V. Clarke, Using thematic analysis in psychology, Qualitative Research in Psychology, 2006.