Response Times When Interpreting Artificial Subtle Expressions Are Shorter than with Human-like Speech Sounds

Artificial subtle expressions (ASEs) are machine-like expressions used to convey a system's confidence level to users intuitively. In this paper, we focus on the cognitive load imposed on users when interpreting ASEs. Specifically, we assume that a shorter response time indicates a lower cognitive load, and we hypothesize that users show shorter response times when interpreting ASEs than when interpreting human-like speech sounds. We verified this hypothesis in a web-based experiment that estimated participants' cognitive loads by measuring their response times in interpreting ASEs and speech sounds.
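The comparison described above reduces to contrasting response times between two conditions. As a minimal sketch of that analysis, the following Python snippet compares per-participant response times for the two conditions using Welch's t statistic; the response-time values are purely illustrative assumptions, not data from this study:

```python
from statistics import mean, stdev

# Hypothetical response times in milliseconds (illustrative values only,
# not the paper's data): one measurement per participant per condition.
ase_rt = [620, 580, 650, 600, 590, 640, 610, 630]
speech_rt = [780, 820, 760, 800, 790, 830, 770, 810]

def welch_t(a, b):
    """Welch's t statistic for two independent samples
    (does not assume equal variances)."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / ((va / na + vb / nb) ** 0.5)

print(f"mean ASE RT:    {mean(ase_rt):.1f} ms")
print(f"mean speech RT: {mean(speech_rt):.1f} ms")
print(f"Welch's t:      {welch_t(ase_rt, speech_rt):.2f}")
```

Under the hypothesis, the ASE condition's mean response time would be lower, yielding a negative t value; in practice a p-value (e.g. via `scipy.stats.ttest_ind` with `equal_var=False`) would accompany the statistic.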
