LBTask: A Benchmark for Spatial Crowdsourcing Platforms

The popularity of smartphones has driven the rapid development of crowdsourcing, and the emergence of crowdsourcing applications has brought great convenience to daily life. On traditional crowdsourcing platforms such as Amazon Mechanical Turk and CrowdFlower, requesters publish tasks on the site, and workers browse the available tasks, choose those of interest, and submit their answers. Spatial crowdsourcing platforms (such as gMission), in contrast, assign tasks that are tied to physical locations. However, most existing crowdsourcing platforms support only a small number of assignment and quality control algorithms. In this paper, we design LBTask, a benchmark for spatial crowdsourcing platforms that focuses on location-aware crowdsourcing tasks. Compared with other crowdsourcing platforms, LBTask's architecture supports a variety of assignment and quality control algorithms that can be selected according to different strategies. When distributing and assigning tasks, it takes the locations of tasks and workers into consideration in addition to time and other factors.
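The abstract does not specify which assignment algorithms LBTask implements; as an illustration of the kind of location- and time-aware assignment it describes, the following is a minimal sketch of a generic greedy strategy (all names, coordinates, and the `max_km` radius are hypothetical, not from the paper): each task is matched to the nearest free worker within a distance threshold, with earlier-deadline tasks assigned first.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in kilometres.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def greedy_assign(tasks, workers, max_km=5.0):
    """Assign each task to the nearest free worker within max_km.

    tasks:   list of (task_id, lat, lon, deadline)
    workers: list of (worker_id, lat, lon)
    Returns a dict task_id -> worker_id; unassignable tasks are omitted.
    """
    free = {wid: (lat, lon) for wid, lat, lon in workers}
    assignment = {}
    # Earlier deadlines first, so urgent tasks get the closest workers.
    for tid, tlat, tlon, _deadline in sorted(tasks, key=lambda t: t[3]):
        best, best_d = None, max_km
        for wid, (wlat, wlon) in free.items():
            d = haversine_km(tlat, tlon, wlat, wlon)
            if d <= best_d:
                best, best_d = wid, d
        if best is not None:
            assignment[tid] = best
            del free[best]  # each worker handles one task at a time
    return assignment

# Hypothetical example: two tasks, two workers near Beijing.
tasks = [("t1", 39.90, 116.40, 10), ("t2", 39.91, 116.41, 5)]
workers = [("w1", 39.905, 116.405), ("w2", 39.95, 116.50)]
print(greedy_assign(tasks, workers))
```

Real platforms refine this with batching, worker capacities, and quality scores, but the sketch shows how spatial distance and task deadlines can jointly drive the assignment decision.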