Do Smart Speakers Respond to Their Errors Properly? A Study on Human-Computer Dialogue Strategy

As smart speakers with voice interaction capabilities continue to spread worldwide, more and more people are becoming accustomed to voice as a new interaction medium. Although speech recognition and natural language processing (NLP) have improved greatly over the past few years, users still encounter errors from time to time, such as "cannot understand" and "no requested audio resource (such as music)", which can be frustrating. It is therefore vital that the smart speaker gives an effective and appropriate response when reporting an error. Currently, the response strategies adopted by leading smart speaker brands in China differ mainly on two dimensions: "apology or not" and "humorous or neutral". We explored users' preferences for response strategies under two error scenarios: "cannot understand" and "no requested audio resource". A 2 (apology: yes vs. no) × 2 (error message tone: humorous vs. neutral) within-subjects experiment was conducted, with two dependent variables: satisfaction and perceived sincerity of the response. The results showed that in both error scenarios, participants were more satisfied and perceived higher sincerity when the smart speaker apologized. In the "no requested audio resource" scenario, humor had no significant effect on satisfaction or perceived sincerity, whereas in the "cannot understand" scenario, humorous expression decreased perceived sincerity.
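A 2 × 2 within-subjects design of this kind is typically analyzed with a repeated-measures ANOVA on each dependent variable. The following minimal sketch, which is not taken from the paper, shows how such an analysis could be run in Python with statsmodels; the column names (participant, apology, tone, satisfaction, sincerity) and the file "ratings.csv" are hypothetical.

    # Minimal sketch (assumed, not from the paper): repeated-measures ANOVA for a
    # 2 (apology: yes/no) x 2 (tone: humorous/neutral) within-subjects design.
    # The file name and column names below are hypothetical.
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # Long-format data: one row per participant per condition, with columns
    # participant, apology, tone, satisfaction, sincerity.
    df = pd.read_csv("ratings.csv")

    # Repeated-measures ANOVA on the satisfaction ratings.
    result = AnovaRM(
        data=df,
        depvar="satisfaction",
        subject="participant",
        within=["apology", "tone"],
    ).fit()
    print(result)

    # The same analysis would be repeated with depvar="sincerity"
    # for the perceived-sincerity ratings.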
