Slow Search: Information Retrieval without Time Constraints

Significant time and effort have been devoted to reducing the time between query receipt and search engine response, and for good reason: research suggests that even slightly higher retrieval latency in Web search engines can lead to dramatic decreases in users' perception of result quality and in their engagement with the search results. While users have come to expect rapid responses from search engines, recent advances in our understanding of how people find information suggest that there are scenarios where a search engine could profitably take significantly longer than a fraction of a second to return relevant content. This raises an important question: what would search look like if search engines were not constrained by existing expectations of speed? In this paper, we explore slow search, a class of search in which traditional speed requirements are relaxed in favor of a high-quality search experience. Via large-scale log analysis and user surveys, we examine how individuals value time when searching. We confirm that speed is important, but also show that there are many search situations where result quality matters more. This highlights intriguing opportunities for search systems to support new search experiences built on high-quality result content that takes time to identify. Slow search has the potential to change the search experience as we know it.
