Matching and Grokking: Approaches to Personalized Crowdsourcing

Personalization aims to tailor content to a person's individual tastes. As a result, the tasks that benefit from personalization are inherently subjective. Many of the most robust approaches to personalization rely on large sets of other people's preferences. However, existing preference data is not always available. In these cases, we propose leveraging online crowds to provide on-demand personalization. We introduce and evaluate two methods for personalized crowdsourcing: taste-matching for finding crowd workers who are similar to the requester, and taste-grokking, where crowd workers explicitly predict the requester's tastes. Both approaches show improvement over a nonpersonalized baseline, with taste-grokking performing well in simpler tasks and taste-matching performing well with larger crowds and tasks with latent decision-making variables.
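The taste-matching idea described above can be sketched in a few lines: rank crowd workers by how closely their ratings on a shared set of calibration items agree with the requester's, and route tasks to the closest matches. This is an illustrative sketch, not the paper's implementation; the function names, the use of cosine similarity, and the example ratings are all assumptions.

```python
def cosine_similarity(a, b):
    """Cosine similarity between two equal-length rating vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def match_workers(requester_ratings, worker_ratings, k=3):
    """Return the k workers whose calibration ratings best match the requester's."""
    ranked = sorted(
        worker_ratings.items(),
        key=lambda kv: cosine_similarity(requester_ratings, kv[1]),
        reverse=True,
    )
    return [worker for worker, _ in ranked[:k]]

# Hypothetical 1-5 ratings on five shared calibration items.
requester = [5, 1, 4, 2, 5]
workers = {
    "w1": [5, 2, 4, 1, 5],  # similar tastes
    "w2": [1, 5, 2, 5, 1],  # opposite tastes
    "w3": [4, 1, 5, 2, 4],  # similar tastes
    "w4": [3, 3, 3, 3, 3],  # indifferent
}
print(match_workers(requester, workers, k=2))  # → ['w1', 'w3']
```

Taste-grokking, by contrast, would keep all workers but ask each to predict the requester's rating rather than give their own, so no similarity ranking is needed.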
