Providing Direct Answers in Search Results: A Study of User Behavior

To study how providing direct answers in search results affects user behavior, we conducted a controlled user study analyzing factors including reading time, eye-tracked attention, and the influence of answer module content quality. We also studied a more advanced answer interface in which multiple answers are shown on the search engine results page (SERP). Our results show that when answers are provided, users focus more heavily than usual on the top items in the result list. The presence of the answer module improves user engagement on SERPs, reduces user effort, and promotes user satisfaction during the search process. Furthermore, we investigate how question type -- factoid or non-factoid -- affects user interaction patterns. This work provides insight into the design of SERPs that include direct answers to queries, including when answers should be shown.
