Investigating Expectations for Voice-based and Conversational Argument Search on the Web

Millions of arguments are shared on the web. Future information systems will be able to exploit this valuable knowledge source and to retrieve arguments that are relevant and convincing for a specific need---all with an interface as intuitive as asking a friend "Why ...". Although recent advances in argument mining, conversational search, and voice recognition have put such systems within reach, many questions remain open, especially on the interface side. In this regard, the paper at hand presents the first study of argument search behavior. We conducted an online survey and a focused user study, putting the emphasis on what people expect argument search to be like rather than on what current first-generation systems provide. Our participants expected to use voice-based argument search mostly at home, but also together with others. Moreover, they expect such search systems to provide rich information on the retrieved arguments, such as their source, supporting evidence, and background knowledge on mentioned entities or events. In observed interactions with a simulated system, we found that the participants adapted their search behavior to different types of tasks, and that an up-front categorization of the retrieved arguments is perceived as helpful if it is kept short. Our findings are directly applicable to the design of argument search systems, not only voice-based ones.
