Comparing Data from Chatbot and Web Surveys: Effects of Platform and Conversational Style on Survey Response Quality

This study explores the feasibility of a text-based virtual agent as a new survey method for overcoming the response-quality problems common to web surveys, which are caused by respondents' inattention. To this end, we conducted a 2 (platform: web vs. chatbot) × 2 (conversational style: formal vs. casual) experiment and used satisficing theory to compare the data quality of the responses. We found that participants in the chatbot survey, compared with those in the web survey, were more likely to produce differentiated responses and less likely to satisfice; the chatbot survey thus yielded higher-quality data. Moreover, when a casual conversational style was used, participants were less likely to satisfice, although this effect was found only in the chatbot condition. These results imply that conversational interactivity arises when a chat interface is paired with messages delivered in an effective tone. Based on an analysis of the qualitative responses, we also show that a chatbot can perform part of a human interviewer's role by applying effective communication strategies.
