How human judgment impairs automated deception detection performance

Background: Deception detection is a prevalent problem for security practitioners. With a need for more large-scale approaches, automated methods using machine learning have gained traction. However, even automated detection still entails considerable error rates. Findings from other domains suggest that hybrid human-machine integrations could offer a viable path for deception detection tasks.

Method: We collected a corpus of truthful and deceptive answers about participants' autobiographical intentions (n = 1640) and tested whether a combination of supervised machine learning and human judgment could improve deception detection accuracy. Human judges were presented with the outcome of the automated credibility judgment of truthful and deceptive statements. They could either fully overrule it (hybrid-overrule condition) or adjust it within a given boundary (hybrid-adjust condition).

Results: In neither of the hybrid conditions did the human judgment add a meaningful contribution. Machine learning in isolation identified truth-tellers and liars with an overall accuracy of 69%. Human involvement through hybrid-overrule decisions brought the accuracy back to chance level. The hybrid-adjust condition did not improve deception detection performance. The decision-making strategies of humans suggest that the truth bias (the tendency to assume that the other is telling the truth) could explain the detrimental effect.

Conclusion: The current study does not support the notion that humans can meaningfully add to the deception detection performance of a machine learning system.
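The two hybrid decision rules described in the Method section can be sketched as simple functions. This is a minimal illustration, not the study's implementation: the label values, the credibility score scale of 0 to 1, and the adjustment bound of 0.2 are all illustrative assumptions.

```python
def hybrid_overrule(model_label, human_label=None):
    """Hybrid-overrule: the human judge may fully replace the model's
    verdict; if the judge declines, the model's label stands.
    (Illustrative sketch; labels are assumed, not from the study.)"""
    return human_label if human_label is not None else model_label


def hybrid_adjust(model_score, human_shift, bound=0.2):
    """Hybrid-adjust: the human judge may shift the model's credibility
    score, but only within +/- `bound`; the result is clipped to [0, 1].
    (The 0.2 bound is a hypothetical value for illustration.)"""
    shift = max(-bound, min(bound, human_shift))
    return min(1.0, max(0.0, model_score + shift))
```

Under rules like these, a strong truth bias would show up as judges overruling "deceptive" verdicts toward "truthful", or consistently shifting credibility scores upward, which is consistent with the detrimental effect the Results describe.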
