The Responsibility Quantification Model of Human Interaction With Automation

Intelligent systems and advanced automation participate in information collection and evaluation, decision-making, and the implementation of chosen actions. In such systems, human responsibility becomes equivocal. Understanding human causal responsibility is particularly important when systems can harm people, as with autonomous vehicles or, most notably, autonomous weapon systems (AWSs). Using information theory, we developed a responsibility quantification (ResQu) model of human causal responsibility in intelligent systems and demonstrated its application to decisions regarding AWSs. The analysis reveals that the comparative human responsibility for outcomes is often low, even when major functions are allocated to the human. Thus, broadly stated policies of keeping humans in the loop and maintaining meaningful human control are misleading and cannot truly direct decisions on how to involve humans in advanced automation. The current model assumes stationarity and full knowledge of the characteristics of the human and the automation, and it ignores temporal aspects. It is an initial step toward a comprehensive responsibility model that will make it possible to quantify human causal responsibility. The model can serve as an additional tool in the analysis of system design alternatives and policy decisions regarding human causal responsibility, providing a novel, quantitative perspective on these matters.

Note to Practitioners—We developed a theoretical model and a quantitative measure for computing the comparative human causal responsibility in the interaction with intelligent systems and advanced automation. Our responsibility measure can be applied by practitioners (system designers, regulators, and so on) to estimate user responsibility in specific system configurations.
This can serve as an additional tool for comparing alternative system designs or deployment policies, by relating different automation design options to their predicted effect on the users’ responsibility. To apply the model (which is based on entropy and mutual information) to real-world systems, one must deduce the underlying distributions, either from known system properties or from empirical observations collected over time. The initial version of the model presented here assumes that the combined human–automation system is stationary and ergodic. Real-world systems may not satisfy these assumptions, or they may not be observable long enough to allow accurate estimation of the required multivariate probabilities; in such cases, the computed responsibility values should be treated with caution. Nevertheless, constructing a ResQu information flow model, combined with sensitivity analyses of how changes in the input probabilities and assumptions affect the responsibility measure, will often reveal important qualitative properties and supply valuable insights regarding the general level of meaningful human involvement and comparative responsibility in a system.
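To make the entropy-based approach concrete, the following is a minimal, hypothetical sketch of how a responsibility share could be estimated from an observed joint distribution of automation recommendations and final actions. The ratio H(action | automation) / H(action) used here is an illustrative stand-in, not the paper’s exact ResQu measure, and all variable names are our own assumptions.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector (zero entries allowed)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def human_responsibility(joint):
    """Illustrative responsibility share: H(action | automation) / H(action).

    `joint[a_auto, a_final]` is the empirical joint probability of the
    automation's recommendation and the final action taken.  The ratio is 0
    when the final action is fully determined by the automation (the human
    adds no information) and 1 when the action is independent of it.
    """
    p_auto = joint.sum(axis=1)    # marginal over automation recommendations
    p_act = joint.sum(axis=0)     # marginal over final actions
    h_act = entropy(p_act)
    # Conditional entropy via the chain rule: H(act | auto) = H(auto, act) - H(auto)
    h_cond = entropy(joint.flatten()) - entropy(p_auto)
    return h_cond / h_act if h_act > 0 else 0.0

# Human always follows the automation: responsibility share is 0.
follows = np.array([[0.5, 0.0],
                    [0.0, 0.5]])
# Human acts independently of the automation: responsibility share is 1.
independent = np.full((2, 2), 0.25)
print(human_responsibility(follows), human_responsibility(independent))
```

In practice, the joint table would be estimated from logged system data, and (as noted above) the stationarity and ergodicity assumptions, as well as the sensitivity of the result to estimation error in the probabilities, should be checked before the number is interpreted.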

[1]  Christopher D. Wickens,et al.  A model for types and levels of human interaction with automation , 2000, IEEE Trans. Syst. Man Cybern. Part A.

[2]  William H. Sanders,et al.  The Multiple-Asymmetric-Utility System Model: A Framework for Modeling Cyber-Human Systems , 2011, 2011 Eighth International Conference on Quantitative Evaluation of SysTems.

[3]  Stephen Goose The case for banning killer robots , 2015, Commun. ACM.

[4]  Jeroen van den Hoven,et al.  Meaningful Human Control over Autonomous Systems: A Philosophical Account , 2018, Front. Robot. AI.

[5]  G. Williams Causation in the Law , 1961, The Cambridge Law Journal.

[6]  Thomas Hellström,et al.  On the moral responsibility of military robots , 2013, Ethics and Information Technology.

[7]  Maarten Sierhuis,et al.  Coactive design , 2014, HRI 2014.

[8]  Amy R. Pritchett,et al.  Aviation Automation: General Perspectives and Specific Guidance for the Design of Modes and Alerts , 2009 .

[9]  David Garlan,et al.  Reasoning about Human Participation in Self-Adaptive Systems , 2015, 2015 IEEE/ACM 10th International Symposium on Software Engineering for Adaptive and Self-Managing Systems.

[10]  Rangaraj M. Rangayyan,et al.  A review of computer-aided diagnosis of breast cancer: Toward the detection of subtle signs , 2007, J. Frankl. Inst..

[11]  Michael Himmelsbach,et al.  Autonomous Ground Vehicles—Concepts and a Path to the Future , 2012, Proceedings of the IEEE.

[12]  Tobias Gerstenberg,et al.  Causal Conceptions in Social Explanation and Moral Evaluation , 2015, Perspectives on psychological science : a journal of the Association for Psychological Science.

[13]  Joachim Meyer,et al.  Effects of Warning Validity and Proximity on Responses to Warnings , 2001, Hum. Factors.

[14]  Joseph Y. Halpern,et al.  Responsibility and Blame: A Structural-Model Approach , 2003, IJCAI.

[15]  Neil A. Macmillan,et al.  Detection Theory: A User's Guide , 1991 .

[16]  Mark Coeckelbergh Moral Responsibility, Technology, and Experiences of the Tragic: From Kierkegaard to Offshore Engineering , 2012, Sci. Eng. Ethics.

[17]  D. M. Green,et al.  Signal detection theory and psychophysics , 1966 .

[18]  Da-Yin Liao Automation and Integration in Semiconductor Manufacturing , 2010 .

[19]  Karen M. Feigh,et al.  Requirements for Effective Function Allocation , 2014 .

[20]  Sang Joon Kim,et al.  A Mathematical Theory of Communication , 2006 .

[21]  Deborah G. Johnson Technology with No Human Responsibility? , 2015 .

[22]  Bonnie Docherty,et al.  Losing Humanity : The Case Against Killer Robots , 2012 .

[23]  Raj Madhavan,et al.  Lethal Autonomous Weapon Systems [Ethical, Legal, and Societal Issues] , 2018, IEEE Robotics & Automation Magazine.

[24]  H. Theil On the Estimation of Relationships Involving Qualitative Variables , 1970, American Journal of Sociology.

[25]  N. Sharkey Saying ‘No!’ to Lethal Autonomous Targeting , 2010 .

[26]  Bruno Siciliano,et al.  Autonomy in surgical robots and its meaningful human control , 2019, Paladyn J. Behav. Robotics.

[27]  Deborah G. Johnson,et al.  Negotiating autonomy and responsibility in military robots , 2013, Ethics and Information Technology.

[28]  Joachim Meyer,et al.  Defining and measuring physicians' responses to clinical reminders , 2009, J. Biomed. Informatics.

[29]  I POLLACK,et al.  On the Performance of a Combination of Detectors , 1964, Human factors.

[30]  Ariel Guersenzvaig Autonomous Weapon Systems: Failing the Principle of Discrimination , 2018, IEEE Technology and Society Magazine.

[31]  Thomas S. Ulen,et al.  An Economic Case for Comparative Negligence , 1986 .

[32]  Maarten Sierhuis,et al.  The Fundamental Principle of Coactive Design: Interdependence Must Shape Autonomy , 2010, COIN@AAMAS&MALLOW.

[33]  Robert Sparrow Predators or plowshares? arms control of robotic weapons , 2009, IEEE Technology and Society Magazine.

[34]  Merel Noorman,et al.  Responsibility Practices and Unmanned Military Technologies , 2014, Sci. Eng. Ethics.

[35]  Thomas M. Cover,et al.  Elements of Information Theory , 2005 .

[36]  M. Alicke,et al.  Causal deviance and the ascription of intent and blame , 2019, Philosophical Psychology.

[37]  Tobias Gerstenberg,et al.  Finding fault: Causality and counterfactuals in group attributions , 2012, Cognition.

[38]  Kunio Doi,et al.  Computer-aided diagnosis in medical imaging: Historical review, current status and future potential , 2007, Comput. Medical Imaging Graph..

[39]  Roger C. Conant,et al.  Laws of Information which Govern Systems , 1976, IEEE Transactions on Systems, Man, and Cybernetics.

[40]  ArkinRonald The case for banning killer robots , 2015 .

[41]  Charles M. Jones,et al.  Does Algorithmic Trading Improve Liquidity? , 2010 .

[42]  M. C. Elish,et al.  Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction , 2019, Engaging Science, Technology, and Society.

[43]  Anne Gerdes,et al.  Lethal Autonomous Weapon Systems and Responsibility Gaps , 2016 .

[44]  Rebecca Crootof,et al.  The Killer Robots Are Here: Legal and Policy Implications , 2014 .

[45]  R. W. Wright,et al.  Causation in Tort Law , 1985 .

[46]  Thomas M. Powers,et al.  Computer Systems and Responsibility: A Normative Look at Technological Complexity , 2005, Ethics and Information Technology.

[47]  David Garlan,et al.  Evaluating Trade-Offs of Human Involvement in Self-Adaptive Systems , 2017 .

[48]  D. Woods Conflicts between Learning and Accountability in Patient Safety , 2014 .

[49]  Henri Theil,et al.  Statistical Decomposition Analysis: With Applications in the Social and Administrative Sciences , 1972 .

[50]  Rachel A. Haga,et al.  Toward meaningful human control of autonomous weapons systems through function allocation , 2015, 2015 IEEE International Symposium on Technology and Society (ISTAS).

[51]  Madeleine Clare Elish,et al.  Praise the Machine! Punish the Human! The Contradictory History of Accountability in Automated Aviation , 2015 .

[52]  M. L. Cummings Lethal Autonomous Weapons: Meaningful Human Control or Meaningful Human Certification? [Opinion] , 2019, IEEE Technol. Soc. Mag..

[53]  Nicole A. Vincent A Structured Taxonomy of Responsibility Concepts , 2010 .

[54]  Tobias Gerstenberg,et al.  Spreading the blame: The allocation of responsibility amongst multiple agents , 2010, Cognition.

[55]  Austin Wyatt,et al.  Charting great power progress toward a lethal autonomous weapon system demonstration point , 2020 .

[56]  T. Wickens Elementary Signal Detection Theory , 2001 .

[57]  M. Moore Causation and Responsibility: An Essay in Law, Morals, and Metaphysics , 2009 .

[58]  Christopher Cherry,et al.  PUNISHMENT AND RESPONSIBILITY: ESSAYS IN THE PHILOSOPHY OF LAW , 1969 .

[59]  Mary L. Cummings,et al.  Automation and Accountability in Decision Support System Interface Design , 2006 .

[60]  Ronald C. Arkin,et al.  The case for banning killer robots , 2015, Commun. ACM.

[61]  James Igoe Walsh,et al.  Political accountability and autonomous weapons , 2015 .

[62]  Kelly G. Shaver,et al.  The attribution of blame : causality, responsibility, and blameworthiness , 1985 .

[63]  Davide Castelvecchi,et al.  Can we open the black box of AI? , 2016, Nature.

[64]  Ro'i Zultan,et al.  Causal Responsibility and Counterfactuals , 2013, Cogn. Sci..

[65]  F. Cushman Crime and punishment: Distinguishing the roles of causal and intentional analyses in moral judgment , 2008, Cognition.

[66]  Karen M. Feigh,et al.  Measuring Human-Automation Function Allocation , 2014 .

[67]  Giulio Mecacci,et al.  Meaningful human control as reason-responsiveness: the case of dual-mode vehicles , 2019, Ethics and Information Technology.

[68]  Andreas Matthias,et al.  The responsibility gap: Ascribing responsibility for the actions of learning automata , 2004, Ethics and Information Technology.

[69]  David D. Woods,et al.  Cognitive Technologies: The Design of Joint Human-Machine Cognitive Systems , 1986, AI Mag..

[70]  Rachel Haga,et al.  Lost in Translation: Building a Common Language for Regulating Autonomous Weapons , 2016, IEEE Technology and Society Magazine.

[71]  Joachim Meyer,et al.  Theoretical, Measured, and Subjective Responsibility in Aided Decision Making , 2019, ACM Trans. Interact. Intell. Syst..

[72]  P. Asaro On banning autonomous weapon systems: human rights, automation, and the dehumanization of lethal decision-making , 2012, International Review of the Red Cross.

[73]  Mark Mulder,et al.  A Topology of Shared Control Systems—Finding Common Ground in Diversity , 2018, IEEE Transactions on Human-Machine Systems.

[74]  Bertram F. Malle,et al.  A Theory of Blame , 2014 .

[75]  Joachim Meyer,et al.  Conceptual Issues in the Study of Dynamic Hazard Warnings , 2004, Hum. Factors.