Human Trust-Based Feedback Control: Dynamically Varying Automation Transparency to Optimize Human-Machine Interactions

Automation has become prevalent in the everyday lives of humans. However, despite significant technological advancements, human supervision and intervention are still necessary in almost all sectors of automation, ranging from manufacturing and transportation to disaster management and health care [1]. It is therefore expected that the future will be built around human–agent collectives [2], which will require efficient and successful interaction and coordination between humans and machines. It is well established that human trust in automation plays a central role in achieving this coordination [3]–[5]. For example, the benefits of automation are lost when humans override it because of a fundamental lack of trust [3], [5], and accidents may occur because of human mistrust of such systems [6]. Trust must therefore be appropriately calibrated to avoid both the disuse and the misuse of automation [4].

[1] Neera Jain, et al., Computational Modeling of the Dynamics of Human Trust During Human–Machine Interactions, 2019, IEEE Transactions on Human-Machine Systems.

[2] Victoria Alonso, et al., System Transparency in Shared Autonomy: A Mini Review, 2018, Front. Neurorobot.

[3] Alimohammad Shahri, et al., Four reference models for transparency requirements in information systems, 2018, Requirements Engineering.

[4] Mike Ananny, et al., Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability, 2018, New Media Soc.

[5] Siddhartha S. Srinivasa, et al., Planning with Trust for Human-Robot Collaboration, 2018, 2018 13th ACM/IEEE International Conference on Human-Robot Interaction (HRI).

[6] E. Navarro-Pardo, et al., ExGUtils: A Python Package for Statistical Analysis With the ex-Gaussian Probability Density, 2017, Front. Psychol.

[7] Lilly Irani, et al., Amazon Mechanical Turk, 2018, Advances in Intelligent Systems and Computing.

[8] Neera Jain, et al., Dynamic modeling of trust in human-machine interactions, 2017, 2017 American Control Conference (ACC).

[9] Chao Deng, et al., Driver's Cognitive Workload and Driving Performance under Traffic Sign Information Exposure in Complex Environments: A Case Study of the Highways in China, 2017, International Journal of Environmental Research and Public Health.

[10] Ning Wang, et al., The Impact of POMDP-Generated Explanations on Trust and Performance in Human-Robot Teams, 2016, AAMAS.

[11] Ufuk Topcu, et al., Synthesis of Human-in-the-Loop Control Protocols for Autonomous Systems, 2016, IEEE Transactions on Automation Science and Engineering.

[12] Ning Wang, et al., Trust calibration within a human-robot team: Comparing automatically generated explanations, 2016, 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI).

[13] Michael A. Rupp, et al., Intelligent Agent Transparency in Human–Agent Teaming for Multi-UxV Management, 2016, Hum. Factors.

[14] Matt Richtel, et al., Google's Driverless Cars Run into Problem: Cars with Drivers, 2015.

[15] Ning Wang, et al., Intelligent Agents for Virtual Simulation of Human-Robot Interaction, 2015, HCI.

[16] Gregory Dudek, et al., OPTIMo: Online Probabilistic Trust Inference Model for Asymmetric Human-Robot Collaborations, 2015, 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI).

[17] Nicholas R. Jennings, et al., Human-agent collectives, 2014, CACM.

[18] D. Bates, et al., Fitting Linear Mixed-Effects Models Using lme4, 2014, arXiv:1406.5823.

[19] Michael W. Boyce, et al., Situation Awareness-Based Agent Transparency, 2014.

[20] E. Navarro-Pardo, et al., Differences Between Young and Old University Students on a Lexical Decision Task: Evidence Through an Ex-Gaussian Approach, 2013, The Journal of General Psychology.

[21] Z. Dienes, et al., Application of the ex-Gaussian function to the effect of the word blindness suggestion on Stroop task performance suggests no word blindness, 2013, Front. Psychol.

[22] Xin Liu, et al., Modeling Context Aware Dynamic Trust Using Hidden Markov Model, 2012, AAAI.

[23] C. Hulme, et al., Reaction Time Variability in Children With ADHD Symptoms and/or Dyslexia, 2012, Developmental Neuropsychology.

[24] Jian Rong, et al., Driver's Visual Cognition Behaviors of Traffic Signs Based on Eye Movement Parameters, 2011.

[25] Athman Bouguettaya, et al., Web Services Reputation Assessment Using a Hidden Markov Model, 2009, ICSOC/ServiceWave.

[26] Vladimiro Sassone, et al., HMM-Based Trust Model, 2009, Formal Aspects in Security and Trust.

[27] Gilbert L. Peterson, et al., A Trust-Based Multiagent System, 2009, 2009 International Conference on Computational Science and Engineering.

[28] Martin Buss, et al., An HMM approach to realistic haptic human-robot interaction, 2009, World Haptics 2009 - Third Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems.

[29] R. Whelan, Effective Analysis of Reaction Time Data, 2008.

[30] Christopher D. Wickens, et al., Humans: Still Vital After All These Years of Automation, 2008, Hum. Factors.

[31] Neil J. Mansfield, et al., Evaluation of reaction time performance and subjective workload during whole-body vibration exposure while seated in upright and twisted postures with and without armrests, 2008.

[32] Daniel R. Ilgen, et al., Not All Trust Is Created Equal: Dispositional and History-Based Trust in Human-Automation Interactions, 2008, Hum. Factors.

[33] Katsuya Matsunaga, et al., Differences of drivers' reaction times according to age and mental workload, 2008, Accident Analysis and Prevention.

[34] Denis Cousineau, et al., How to use MATLAB to fit the ex-Gaussian and other probability functions to a distribution of response times, 2008.

[35] Alex Bateman, et al., An introduction to hidden Markov models, 2007, Current Protocols in Bioinformatics.

[36] Judith Masthoff, et al., A Survey of Explanations in Recommender Systems, 2007, 2007 IEEE 23rd International Conference on Data Engineering Workshop.

[37] Christopher D. Wickens, et al., The benefits of imperfect diagnostic automation: a synthesis of the literature, 2007.

[38] Ji Gao, et al., Extending the decision field theory to model operators' reliance on automation in supervisory control situations, 2006, IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans.

[39] Alexander Felfernig, et al., An Empirical Study on Consumer Behavior in the Interaction with Knowledge-based Recommender Applications, 2006, The 8th IEEE International Conference on E-Commerce Technology and The 3rd IEEE International Conference on Enterprise Computing, E-Commerce, and E-Services (CEC/EEE'06).

[40] J. Swanson, et al., Reaction Time Distribution Analysis of Neuropsychological Performance in an ADHD Sample, 2006, Child Neuropsychology: A Journal on Normal and Abnormal Development in Childhood and Adolescence.

[41] Albert Kircher, et al., Using mobile telephones: cognitive workload and attention resource allocation, 2004, Accident Analysis and Prevention.

[42] John D. Lee, et al., Trust in Automation: Designing for Appropriate Reliance, 2004, Hum. Factors.

[43] Joelle Pineau, et al., Point-based value iteration: An anytime algorithm for POMDPs, 2003, IJCAI.

[44] Cees J. H. Midden, et al., The effects of errors on system trust, self-confidence, and the allocation of control in route planning, 2003, Int. J. Hum. Comput. Stud.

[45] Allison M. Okamura, et al., Recognition of operator motions for real-time assistance using virtual fixtures, 2003, 11th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (HAPTICS 2003).

[46] Dennis R. Wixon, et al., CHI '02 Extended Abstracts on Human Factors in Computing Systems, 2002, CHI 2002.

[47] Christopher D. Wickens, et al., A model for types and levels of human interaction with automation, 2000, IEEE Trans. Syst. Man Cybern. Part A.

[48] Mark R. Lehto, et al., Foundations for an Empirically Determined Scale of Trust in Automated Systems, 2000.

[49] N. Moray, et al., Adaptive automation, trust, and self-confidence in fault management of time-critical tasks, 2000, Journal of Experimental Psychology: Applied.

[50] Toshiyuki Inagaki, et al., Laboratory studies of trust between humans and machines in automated systems, 1999.

[51] D. Balota, et al., Word frequency, repetition, and lexicality effects in word recognition tasks: beyond measures of central tendency, 1999, Journal of Experimental Psychology: General.

[52] Jean E. Fox, et al., The effects of information accuracy on user trust and compliance, 1996, CHI Conference Companion.

[53] N. Moray, et al., Trust in automation. Part II. Experimental studies of trust and human intervention in a process control simulation, 1996, Ergonomics.

[54] Bonnie M. Muir, et al., Trust in automation. Part I. Theoretical issues in the study of trust and human intervention in automated systems, 1994.

[55] Leslie Pack Kaelbling, et al., Acting Optimally in Partially Observable Stochastic Domains, 1994, AAAI.

[56] John Braithwaite, et al., Trust and compliance, 1994.

[57] Martin L. Puterman, et al., Markov Decision Processes: Discrete Stochastic Dynamic Programming, 1994.

[58] Robert W. Proctor, et al., Human factors in simple and complex systems, 1993.

[59] N. Moray, Trust, control strategies and allocation of function in human-machine systems, 1992, Ergonomics.

[60] John E. Deaton, et al., Theory and Design of Adaptive Automation in Aviation Systems, 1992.

[61] D. Mewhort, et al., Analysis of Response Time Distributions: An Example Using the Stroop Task, 1991.

[62] William B. Rouse, et al., Adaptive Aiding for Human/Computer Control, 1988.

[63] Bonnie M. Muir, et al., Trust Between Humans and Machines, and the Design of Decision Aids, 1987, Int. J. Man Mach. Stud.

[64] R. Ratcliff, et al., Retrieval Processes in Recognition Memory, 1976.

[65] R. Hohle, Inferred components of reaction times as functions of foreperiod duration, 1965, Journal of Experimental Psychology.

[66] W. J. McGill, et al., The general-gamma distribution and reaction times, 1965.

[67] Neera Jain, et al., Improving Human-Machine Collaboration Through Transparency-based Feedback – Part II: Control Design and Synthesis, 2019, IFAC-PapersOnLine.

[68] Neera Jain, et al., Improving Human-Machine Collaboration Through Transparency-based Feedback – Part I: Human Trust and Workload Model, 2019, IFAC-PapersOnLine.

[69] Yue Wang, et al., Human-Collaborative Schemes in the Motion Control of Single and Multiple Mobile Robots, 2017.

[70] Antonio Franchi, et al., Human-Collaborative Schemes in the Motion Control of Single and Multiple Mobile Robots, 2017.

[71] Tove Helldin, et al., Transparency for future semi-automated systems: effects of transparency on operator performance, workload and trust, 2014.

[72] R Core Team, R: A language and environment for statistical computing, 2014.

[73] Mark Hoogendoorn, et al., Modelling biased human trust dynamics, 2013, Web Intell. Agent Syst.

[74] Holly A. Yanco, et al., Modeling trust to improve human-robot interaction, 2012.

[75] Groupe PDMIA, Markov Decision Processes in Artificial Intelligence, 2009.

[76] Svein J. Knapskog, et al., Learning Trust in Dynamic Multiagent Environments using HMMs, 2008.

[77] David Crundall, et al., Peripheral Detection Rates in Drivers, 1999.

[78] Raja Parasuraman, et al., Trust in Decision Aids: A Model and Its Training Implications, 1998.

[79] Deborah A. Boehm-Davis, et al., Effects of Age and Congestion Information Accuracy of Advanced Traveler Information Systems on User Trust and Compliance, 1998.

[80] John D. Lee, et al., Trust, self-confidence, and operators' adaptation to automation, 1994, Int. J. Hum. Comput. Stud.

[81] R. Duncan Luce, et al., Response Times: Their Role in Inferring Elementary Mental Organization, 1986.

[82] P. Jaśkowski, Distribution of the human reaction time measurements, 1983, Acta Neurobiologiae Experimentalis.