Incentivizing social media users for mobile crowdsourcing

We focus on the problem of contributor-task matching in mobile crowdsourcing. The idea is to identify existing social media users who possess domain expertise (e.g., photography) and to incentivize them to perform related tasks (e.g., take high-quality pictures). To this end, we propose a framework that extracts potential contributors' expertise from their social media activity and determines incentives for them within a fixed budget, preferentially targeting contributors who are likely to offer quality content. We evaluate the framework on Flickr data covering the entire city of Barcelona and show that it achieves high task quality and wide geographic coverage without compromising fairness.

Highlights
- We aim to engage users from existing online communities for mobile crowdsourcing.
- We study the joint task-to-crowdworker assignment and budget allocation problem.
- We propose a general, data-driven, quality-aware incentive framework.
- Evaluation using user profile information shows that accomplished tasks reach high quality.
- Results show broad task coverage without discriminating against less popular tasks.
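
The abstract does not spell out the allocation mechanism, but the budget-constrained, quality-aware matching it describes can be illustrated with a simple greedy heuristic. The Python sketch below is an assumption rather than the paper's actual algorithm: it ranks hypothetical (contributor, task) candidates by estimated quality per unit of incentive and covers each task at most once while staying within a total budget; all user names, tasks, quality scores, and costs are illustrative.

# Hedged sketch, NOT the paper's method: greedy, quality-aware
# contributor-task assignment under a total incentive budget.
# All users, tasks, quality scores, and incentive costs are hypothetical.

from dataclasses import dataclass


@dataclass
class Candidate:
    user: str         # social media user id
    task: str         # task the user could perform
    quality: float    # estimated contribution quality (e.g., inferred from profile data)
    incentive: float  # payment needed to engage this user


def greedy_assign(candidates, budget):
    """Assign each task to at most one contributor, preferring high
    quality-per-cost candidates, without exceeding the budget."""
    assigned = {}
    spent = 0.0
    # Knapsack-style heuristic: rank by quality per unit of incentive.
    for c in sorted(candidates, key=lambda c: c.quality / c.incentive, reverse=True):
        if c.task in assigned:
            continue                      # task already covered
        if spent + c.incentive > budget:
            continue                      # would break the budget
        assigned[c.task] = c
        spent += c.incentive
    return assigned, spent


if __name__ == "__main__":
    pool = [
        Candidate("alice", "photo_sagrada_familia", quality=0.9, incentive=5.0),
        Candidate("bob",   "photo_sagrada_familia", quality=0.6, incentive=2.0),
        Candidate("carol", "photo_park_guell",      quality=0.8, incentive=4.0),
        Candidate("dave",  "photo_barceloneta",     quality=0.5, incentive=1.5),
    ]
    assignment, cost = greedy_assign(pool, budget=8.0)
    for task, c in assignment.items():
        print(f"{task}: {c.user} (quality={c.quality}, incentive={c.incentive})")
    print(f"total incentives spent: {cost}")

In practice the framework described in the abstract also accounts for geographic coverage and fairness across tasks, which a pure quality-per-cost ranking does not capture; the sketch only conveys the budget-constrained, quality-aware flavor of the matching problem.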
