Answering search queries with CrowdSearcher

Web users increasingly rely on social interaction to complete and validate the results of their search activities. While search systems are unmatched at retrieving world-wide information, the opinions gathered from friends and from expert or local communities often determine our final decisions: human curiosity and creativity can go well beyond the capabilities of search systems in scouting "interesting" results or suggesting new, unexpected search directions. Such personalized interaction usually takes place outside the search systems and processes, possibly instrumented and mediated by a social network; when the interaction is over and users return to search systems, they do so through new queries that are only loosely related to the previous search or to the social interaction. In this paper we propose CrowdSearcher, a novel search paradigm that treats crowds as first-class sources in the information-seeking process. CrowdSearcher aims at bridging the gap between generalized search systems, which operate upon world-wide information, including facts and recommendations crawled and indexed by computerized systems, and social systems, which can interact with real people, in real time, to capture their opinions, suggestions, and emotions. The technical contribution of this paper is a model and architecture for integrating computerized search with human interaction, showing how search systems can drive and encapsulate social systems. In particular, we show how social platforms such as Facebook, LinkedIn, and Twitter can be used for crowdsourcing search-related tasks; we demonstrate our approach with several prototypes and report on experiments with real user communities.
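To make the abstract's flow concrete, the sketch below shows one plausible reading of the described architecture: the search system packages a query and its candidate results as a crowd task, pushes it to a social platform through a platform-specific adapter, and aggregates the answers that come back. All names here (CrowdTask, PlatformAdapter, StubAdapter, majority_vote) are illustrative assumptions, not APIs from the CrowdSearcher paper, and a real adapter would call the actual posting and comment-retrieval endpoints of Facebook, LinkedIn, or Twitter.

```python
# Illustrative sketch of the crowd-task flow: publish a search-related task
# to a social platform, collect human answers, and aggregate them.
# All class and function names are hypothetical, not from the paper.

from abc import ABC, abstractmethod
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class CrowdTask:
    question: str                 # e.g. "Which of these would you pick?"
    candidates: list[str]         # results produced by the search system
    answers: list[str] = field(default_factory=list)


class PlatformAdapter(ABC):
    """One adapter per social platform (Facebook, LinkedIn, Twitter, ...)."""

    @abstractmethod
    def publish(self, task: CrowdTask) -> None: ...

    @abstractmethod
    def collect(self, task: CrowdTask) -> list[str]: ...


class StubAdapter(PlatformAdapter):
    """Stand-in for a real platform client; a real adapter would post the
    task and read back replies via the platform's API."""

    def __init__(self, canned_answers: list[str]):
        self.canned_answers = canned_answers

    def publish(self, task: CrowdTask) -> None:
        print(f"Posted task: {task.question} {task.candidates}")

    def collect(self, task: CrowdTask) -> list[str]:
        # Keep only replies that name one of the offered candidates.
        return [a for a in self.canned_answers if a in task.candidates]


def majority_vote(task: CrowdTask) -> str:
    """Majority voting is one simple aggregation policy among many."""
    return Counter(task.answers).most_common(1)[0][0]


if __name__ == "__main__":
    task = CrowdTask("Which restaurant fits a business dinner?",
                     ["Trattoria Da Mario", "Osteria Blu", "Cafe 22"])
    adapter = StubAdapter(["Osteria Blu", "Osteria Blu", "Cafe 22"])
    adapter.publish(task)
    task.answers = adapter.collect(task)
    print("Crowd pick:", majority_vote(task))
```

The adapter interface is the point of the sketch: by hiding each platform behind the same publish/collect contract, the search system can drive and encapsulate social systems interchangeably, which is the integration the abstract describes.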
