Operator Responsibility for Outcomes: A Demonstration of the ResQu Model

In systems with advanced automation, human responsibility for outcomes becomes equivocal. We developed the Responsibility Quantification (ResQu) model to compute a measure of operator responsibility (Douer & Meyer, 2020) and compared it to observed and subjective levels of responsibility (Douer & Meyer, 2019). We used the model to calculate operators’ objective responsibility for a common fault event in the control room of a dairy factory and compared the results to the subjective assessments made by people in different roles at the dairy. The capabilities of the automation greatly exceeded those of the human, so the operator should simply comply with the automation’s indications. Consequently, the objective causal human responsibility is 0. Outside observers, such as managers, assigned much higher responsibility to the operator, possibly holding operators responsible for adverse outcomes in situations in which they rightly trusted the automation.
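To make this kind of computation concrete, the sketch below works through a toy, entropy-based responsibility measure in the spirit of the model: the operator’s share is taken to be the portion of outcome uncertainty uniquely resolved by the human action beyond what the automation’s indication already resolves. The variable names, the two-row fault-event distribution, and the specific ratio used here are illustrative assumptions, not the ResQu definitions themselves; the exact model is given in Douer & Meyer (2020).

```python
# Illustrative sketch only (not the exact ResQu formulas): responsibility as the
# share of outcome uncertainty uniquely resolved by the human, given the automation.
import math
from collections import defaultdict

def entropy(dist):
    """Shannon entropy in bits of a {value: probability} mapping."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def marginal(joint, keep):
    """Marginalize a joint {(a, h, y): p} distribution onto the index positions in `keep`."""
    out = defaultdict(float)
    for values, p in joint.items():
        out[tuple(values[i] for i in keep)] += p
    return out

def conditional_entropy(joint, target, given):
    """H(target | given), where target is an index and given is a list of indices."""
    total = 0.0
    p_given = marginal(joint, given)
    p_both = marginal(joint, given + [target])
    for g, pg in p_given.items():
        cond = {b[-1]: pb / pg for b, pb in p_both.items() if b[:-1] == g}
        total += pg * entropy(cond)
    return total

# Made-up joint distribution over (A: automation indication, H: operator action, Y: outcome)
# for a hypothetical fault event. The automation is highly capable: its indication already
# determines the outcome, and the operator complies with it.
joint = {
    ("alarm",    "shut_down_line", "fault_contained"):  0.10,
    ("no_alarm", "continue",       "normal_operation"): 0.90,
}

h_y    = conditional_entropy(joint, 2, [])      # H(Y): total outcome uncertainty
h_y_a  = conditional_entropy(joint, 2, [0])     # H(Y | A): uncertainty left after the automation's indication
h_y_ah = conditional_entropy(joint, 2, [0, 1])  # H(Y | A, H): uncertainty left after automation and human

unique_human_contribution = h_y_a - h_y_ah      # I(Y; H | A): what only the human resolves
responsibility = unique_human_contribution / h_y if h_y > 0 else 0.0
print(f"H(Y) = {h_y:.3f} bits, H(Y|A) = {h_y_a:.3f} bits, operator share = {responsibility:.2f}")
```

Because the automation’s indication already determines the outcome in this toy distribution, H(Y|A) = 0, so the operator’s computed share is 0, mirroring the conclusion above.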

[1] Joachim Meyer et al. Judging One’s Own or Another Person’s Responsibility in Interactions With Automation, 2020, Hum. Factors.

[2] Joachim Meyer et al. Theoretical, Measured, and Subjective Responsibility in Aided Decision Making, 2019, ACM Trans. Interact. Intell. Syst.

[3] Joachim Meyer et al. The Responsibility Quantification Model of Human Interaction With Automation, 2018, IEEE Transactions on Automation Science and Engineering.

[4] L. Ross. From the Fundamental Attribution Error to the Truly Fundamental Attribution Error and Beyond: My Research Journey, 2018, Perspectives on Psychological Science.

[5] Jeroen van den Hoven et al. Meaningful Human Control over Autonomous Systems: A Philosophical Account, 2018, Front. Robot. AI.

[6] Rachel A. Haga et al. Toward meaningful human control of autonomous weapons systems through function allocation, 2015, 2015 IEEE International Symposium on Technology and Society (ISTAS).

[7] Rebecca Crootof et al. The Killer Robots Are Here: Legal and Policy Implications, 2014.

[8] Karen M. Feigh et al. Measuring Human-Automation Function Allocation, 2014.

[9] Deborah G. Johnson et al. Negotiating autonomy and responsibility in military robots, 2014, Ethics and Information Technology.

[10] Abdul V. Roudsari et al. Automation bias: a systematic review of frequency, effect mediators, and mitigators, 2012, J. Am. Medical Informatics Assoc.

[11] Bonnie Docherty et al. Losing Humanity: The Case Against Killer Robots, 2012.

[12] Nicole A. Vincent. A Structured Taxonomy of Responsibility Concepts, 2010.

[13] David Arnott et al. Cognitive biases and decision support systems development: a design science approach, 2006, Inf. Syst. J.

[14] Mary L. Cummings et al. Automation and Accountability in Decision Support System Interface Design, 2006.

[15] Andreas Matthias et al. The responsibility gap: Ascribing responsibility for the actions of learning automata, 2004, Ethics and Information Technology.

[16] Ryan Shaun Joazeiro de Baker et al. Detecting Student Misuse of Intelligent Tutoring Systems, 2004, Intelligent Tutoring Systems.

[17] P. Andrews et al. The psychology of social chess and the evolution of attribution mechanisms: explaining the fundamental attribution error, 2001, Evolution and Human Behavior.

[18] Raja Parasuraman et al. Humans and Automation: Use, Misuse, Disuse, Abuse, 1997, Hum. Factors.