Direct answers for search queries in the long tail

Web search engines now offer more than ranked results. Queries on topics like weather, definitions, and movies may return inline results called answers that can resolve a searcher's information need without any additional interaction. Despite the usefulness of answers, they are limited to popular needs because each answer type is manually authored. To extend the reach of answers to thousands of new information needs, we introduce Tail Answers: a large collection of direct answers that are unpopular individually, but together address a large proportion of search traffic. These answers cover long-tail needs such as the average body temperature for a dog, substitutes for molasses, and the keyboard shortcut for a right-click. We introduce a combination of search log mining and paid crowdsourcing techniques to create Tail Answers. A user study with 361 participants suggests that Tail Answers significantly improved users' subjective ratings of search quality and their ability to solve needs without clicking through to a result. Our findings suggest that search engines can be extended to directly respond to a large new class of queries.
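The abstract mentions mining search logs to surface individually unpopular needs that are collectively frequent. As a minimal sketch of that idea only (the function name, thresholds, and filtering logic here are illustrative assumptions, not the paper's actual pipeline), one could bucket normalized queries by frequency and keep those in the tail band:

```python
from collections import Counter

def tail_candidates(queries, head_cutoff=3, min_count=2):
    """Hypothetical filter: keep normalized queries that recur often
    enough to be worth answering (>= min_count) but fall below the
    head of the traffic distribution (< head_cutoff)."""
    counts = Counter(q.strip().lower() for q in queries)
    return {q: n for q, n in counts.items() if min_count <= n < head_cutoff}
```

In a real system the candidates would then be vetted and authored via crowdsourcing, as the abstract describes; this sketch only illustrates the log-mining step at a toy scale.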
