BigBlueBot: teaching strategies for successful human-agent interactions

Chatbots are becoming increasingly common, with many brands developing conversational experiences on platforms such as IBM's Watson Assistant and Facebook Messenger. However, prior research reveals that users' expectations of what conversational agents can understand and do far outpace their actual technical capabilities. Our work seeks to bridge this gap between expectation and reality by designing a fun learning experience with three goals: explaining how chatbots work by mapping utterances to a set of intents, teaching strategies for avoiding and recovering from conversational breakdowns, and increasing desire to use chatbots by creating feelings of empathy toward them. Our experience, called BigBlueBot, consists of interactions with two chatbots in which breakdowns occur and the user (or chatbot) must recover using one or more repair strategies. In a Mechanical Turk evaluation (N=88), participants learned strategies for having successful human-agent interactions, reported feelings of empathy toward the chatbots, and expressed a desire to interact with chatbots in the future.
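The core mechanism the experience teaches (mapping a user's utterance to one of a fixed set of intents, and falling back to a repair strategy when no intent matches confidently) can be illustrated with a minimal sketch. The intents, keyword matcher, threshold, and repair prompt below are illustrative assumptions, not BigBlueBot's actual implementation; a production assistant such as Watson Assistant would use a trained classifier rather than keyword overlap.

```python
# Minimal sketch of utterance-to-intent mapping with a fallback repair
# strategy. All names and values here (INTENTS, classify, the 0.5
# threshold) are illustrative assumptions, not from BigBlueBot itself.

INTENTS = {
    "store_hours": {"open", "close", "hours", "time"},
    "order_status": {"order", "shipped", "tracking", "package"},
}

CONFIDENCE_THRESHOLD = 0.5  # below this, trigger a repair strategy


def classify(utterance: str) -> tuple[str | None, float]:
    """Map an utterance to the intent whose keywords it overlaps most."""
    words = set(utterance.lower().split())
    best_intent, best_score = None, 0.0
    for intent, keywords in INTENTS.items():
        score = len(words & keywords) / len(keywords)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent, best_score


def respond(utterance: str) -> str:
    intent, confidence = classify(utterance)
    if intent is None or confidence < CONFIDENCE_THRESHOLD:
        # Repair strategy: acknowledge the breakdown and ask the user
        # to rephrase, e.g. with simpler or fewer words.
        return "Sorry, I didn't catch that. Could you rephrase?"
    return f"[handling intent: {intent}]"


if __name__ == "__main__":
    print(respond("What time do you close today?"))  # matches store_hours
    print(respond("Tell me a joke"))                 # falls back to repair
```

The fallback branch is where repair strategies of the kind BigBlueBot teaches come into play: rather than answering incorrectly, the bot admits it did not understand and prompts the user to try again.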
