A Crowd of Your Own: Crowdsourcing for On-Demand Personalization

Personalization is a way for computers to support people’s diverse interests and needs by providing content tailored to the individual. While strides have been made in algorithmic approaches to personalization, most require access to a significant amount of data. However, even when data is limited, online crowds can be used to infer an individual’s personal preferences. Aided by the diversity of tastes among online crowds and their ability to understand others, we show that crowdsourcing is an effective on-demand tool for personalization. Unlike typical crowdsourcing approaches that seek a ground truth, we present and evaluate two crowdsourcing approaches designed to capture personal preferences. The first, taste-matching, identifies workers with taste similar to the requester’s and uses their taste to infer the requester’s preferences. The second, taste-grokking, asks workers to explicitly predict the requester’s taste based on training examples. These techniques are evaluated on two subjective tasks: personalized image recommendation and tailored textual summaries. Taste-matching and taste-grokking both improve over the use of generic workers, and each has different benefits and drawbacks depending on the complexity of the task and the variability of the taste space.
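The taste-matching idea described above resembles user-based collaborative filtering: weight each worker by how well their ratings on shared training items agree with the requester's, then predict the requester's rating on an unseen item as a similarity-weighted vote. The following is a minimal illustrative sketch of that idea, not the paper's actual implementation; all names and rating data are hypothetical.

```python
# Hypothetical sketch of "taste-matching": weight workers by rating
# agreement with the requester, then combine their votes on a new item.
# Ratings are dicts mapping item id -> numeric score (illustrative data).

def similarity(a, b):
    """Cosine similarity between two rating dicts over their shared items."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[i] * b[i] for i in shared)
    na = sum(a[i] ** 2 for i in shared) ** 0.5
    nb = sum(b[i] ** 2 for i in shared) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def predict(requester, workers, item):
    """Similarity-weighted average of taste-matched workers' ratings."""
    weighted = [(similarity(requester, w), w[item])
                for w in workers if item in w]
    total = sum(s for s, _ in weighted)
    if total == 0:
        return None
    return sum(s * r for s, r in weighted) / total

# Illustrative example: one worker with similar taste, one dissimilar.
requester = {"img1": 5, "img2": 1, "img3": 4}          # training ratings
workers = [
    {"img1": 5, "img2": 2, "img3": 5, "img4": 4},      # similar taste
    {"img1": 1, "img2": 5, "img3": 2, "img4": 1},      # dissimilar taste
]
print(predict(requester, workers, "img4"))
```

The prediction is pulled toward the similar worker's rating of `img4`, which is the intended effect; taste-grokking, by contrast, would show workers the requester's training ratings and ask them to predict `img4` directly.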
