Learning From the Slips of Others: Neural Correlates of Trust in Automated Agents

With the rise of increasingly complex artificial intelligence (AI), there is a need to design new methods of monitoring AI in a transparent, human-aware manner. Decades of research have demonstrated that people who are unaware of the exact performance levels of automated algorithms often experience a mismatch in expectations and consequently place either too little or too much trust in an algorithm. Detecting such a mismatch in expectations, that is, poorly calibrated trust, remains a fundamental challenge in research on the use of automation. Because trust is context dependent, no universal measure of trust has been established. Trust is also a difficult construct to investigate because even the act of reflecting on how much one trusts a certain agent can change one's perception of that agent. We hypothesized that electroencephalography (EEG) could provide such a universal index of trust without the need for self-report. In this work, EEG was recorded from 21 participants (mean age = 22.1; 13 female) while they observed a series of algorithms perform a modified version of a flanker task. Each algorithm's credibility and reliability were manipulated. We hypothesized that neural markers of action monitoring, such as the observational error-related negativity (oERN) and the observational error positivity (oPe), are potential candidates for monitoring computer algorithm performance. Our findings demonstrate that (1) both the oERN and the oPe can be reliably elicited while participants monitor computer algorithms, (2) the oPe, unlike the oERN, significantly distinguished between high- and low-reliability algorithms, and (3) the oPe significantly correlated with subjective measures of trust. This work provides the first evidence for the utility of neural correlates of error monitoring in examining trust in computer algorithms.
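
To make the general analysis approach concrete, the sketch below is a minimal illustration (not the paper's analysis code) of how observation-locked ERP component amplitudes such as the oERN and oPe could be quantified and related to algorithm reliability and subjective trust. All variable names, channel choices, time windows, and the synthetic data are assumptions for illustration only; the study's actual parameters may differ.

    # Minimal sketch: quantify oERN/oPe amplitudes from observation-locked EEG
    # epochs and relate them to reliability conditions and trust ratings.
    # Synthetic data stand in for real recordings; windows and channels are assumed.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    sfreq = 500.0                                  # assumed sampling rate (Hz)
    times = np.arange(-0.2, 0.8, 1.0 / sfreq)      # epoch: -200 ms to +800 ms

    # Synthetic stand-in data: (participants, trials, samples) voltages at one channel.
    n_subj, n_trials = 21, 100
    epochs_fcz = rng.normal(0, 5, (n_subj, n_trials, times.size))   # frontocentral, for oERN
    epochs_pz = rng.normal(0, 5, (n_subj, n_trials, times.size))    # parietal, for oPe
    high_reliability = rng.random(n_trials) < 0.5  # trials from the high-reliability algorithm
    trust_ratings = rng.uniform(1, 7, n_subj)      # subjective trust rating per participant

    def mean_amplitude(epochs, tmin, tmax):
        """Per-trial mean voltage within a time window (seconds)."""
        mask = (times >= tmin) & (times <= tmax)
        return epochs[..., mask].mean(axis=-1)

    # Typical (assumed) component windows: oERN ~0-150 ms at FCz, oPe ~200-400 ms at Pz.
    oern = mean_amplitude(epochs_fcz, 0.00, 0.15)   # shape: (participants, trials)
    ope = mean_amplitude(epochs_pz, 0.20, 0.40)

    # (2) Compare oPe between high- and low-reliability algorithms (per-participant means).
    ope_high = ope[:, high_reliability].mean(axis=1)
    ope_low = ope[:, ~high_reliability].mean(axis=1)
    t, p = stats.ttest_rel(ope_high, ope_low)

    # (3) Correlate each participant's mean oPe with their subjective trust rating.
    r, p_corr = stats.pearsonr(ope.mean(axis=1), trust_ratings)
    print(f"oPe high vs. low reliability: t = {t:.2f}, p = {p:.3f}")
    print(f"oPe vs. trust rating: r = {r:.2f}, p = {p_corr:.3f}")

With real data, the epoch arrays would come from preprocessed, artifact-corrected EEG time-locked to the observed algorithm responses; the paper's actual statistical models (e.g., hierarchical or mixed-effects analyses) may be more elaborate than the simple tests shown here.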
