Towards a generic framework for trustworthy spatial crowdsourcing

Many studies foresee significant future growth in the number of mobile smartphone users, in the phones' hardware and software capabilities, and in broadband bandwidth. A transformative area of research is therefore to fully utilize this new platform for various tasks, among which one of the most promising is spatial crowdsourcing. Spatial crowdsourcing (SC) engages individuals, groups, and communities in collecting, analyzing, and disseminating urban, social, and other spatiotemporal information. This new paradigm of data collection has been shown to be useful when traditional means fail (e.g., due to disaster), are censored, or do not scale in time and space. Two major impediments to the success of spatial crowdsourcing in real-world applications are scalability and trust. Without scalability, it is impossible to develop a generic multi-campaign spatial crowdsourcing system (SC-system) that can efficiently match many requesters' tasks to numerous workers in real time. Without trust, the SC-system cannot evaluate the credibility of the contributed data, rendering it ineffective as a replacement for traditional data collection. In this paper, we survey and study both issues of scale and trust in spatial crowdsourcing.
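To make the matching problem concrete, the sketch below shows one simple strategy an SC-server could use: greedily assign each incoming task to the nearest still-available worker within a travel radius. This is an illustrative baseline only, not the paper's method; the names (`haversine_km`, `assign_tasks`, `max_km`) and the greedy strategy are assumptions for the example.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def assign_tasks(tasks, workers, max_km=5.0):
    """Greedily assign each task to the nearest unassigned worker within
    max_km kilometres. tasks is a list of (task_id, (lat, lon)); workers
    is a dict worker_id -> (lat, lon). Returns (task_id, worker_id) pairs."""
    free = dict(workers)  # workers not yet assigned to a task
    assignment = []
    for task_id, (tlat, tlon) in tasks:
        best, best_d = None, max_km
        for wid, (wlat, wlon) in free.items():
            d = haversine_km(tlat, tlon, wlat, wlon)
            if d <= best_d:
                best, best_d = wid, d
        if best is not None:
            assignment.append((task_id, best))
            del free[best]  # each worker handles at most one task here
    return assignment
```

A real SC-system would replace the linear scan with a spatial index (e.g., a grid or R-tree) to scale to many workers, and would account for worker capacity and task deadlines; the greedy one-task-per-worker rule here is only the simplest possible policy.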
