Augmenting expressivity of artificial subtle expressions (ASEs): preliminary design guideline for ASEs

Unfortunately, there is little hope that information-providing systems will ever be perfectly reliable. The results of several studies have indicated that imperfect systems can reduce users' cognitive load by expressing their level of confidence to users. Artificial subtle expressions (ASEs), machine-like artificial sounds added just after a system's suggestion to convey its confidence to users, have attracted particular attention because of their simplicity and efficiency. The purpose of the work reported here was to develop a preliminary design guideline for ASEs in order to determine their expandability. We believe that augmenting the expressivity of ASEs would reduce users' cognitive load in processing the information provided by such systems, and that this would in turn augment users' various cognitive capacities. Our experimental results showed that ASEs with decreasing pitch conveyed a low confidence level to users. These results were used to formulate a concrete design guideline for ASEs.
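To make the central finding concrete, the following is a minimal sketch of how a decreasing-pitch ASE might be synthesized as a short sound appended after a system's spoken suggestion. The specific frequencies, durations, and sample rate here are illustrative assumptions, not the parameters used in the study; only the core idea, a brief tone whose pitch drops to signal low confidence, comes from the text above.

```python
import wave
import numpy as np

def ase_tone(f_start, f_end, duration=0.5, sr=16000):
    """Sine tone whose pitch glides linearly from f_start to f_end (Hz)."""
    t = np.linspace(0, duration, int(sr * duration), endpoint=False)
    # Linearly sweep the instantaneous frequency, then integrate to get phase.
    freq = np.linspace(f_start, f_end, t.size)
    phase = 2 * np.pi * np.cumsum(freq) / sr
    tone = 0.5 * np.sin(phase)
    # Short linear fade-in/out to avoid audible clicks at the edges.
    fade = int(0.01 * sr)
    tone[:fade] *= np.linspace(0.0, 1.0, fade)
    tone[-fade:] *= np.linspace(1.0, 0.0, fade)
    return tone

def write_wav(path, samples, sr=16000):
    """Write mono float samples in [-1, 1] as a 16-bit PCM WAV file."""
    pcm = (samples * 32767).astype(np.int16)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(sr)
        w.writeframes(pcm.tobytes())

# A flat tone followed by a decreasing-pitch tone: the pitch drop in the
# second tone is the cue hypothesized to convey low confidence.
flat = ase_tone(440, 440)
drop = ase_tone(440, 330)
write_wav("ase_low_confidence.wav", np.concatenate([flat, drop]))
```

In an actual system, a sound like this would be played immediately after the spoken suggestion, so the user can interpret the suggestion and its confidence cue as a single utterance.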
