Task recommendation in crowdsourcing systems

In crowdsourcing systems, tasks are distributed to networked people to complete, so that a company's production cost can be greatly reduced. Clearly, it is inefficient when the time a worker spends selecting a task is comparable to the time spent working on it, while the monetary reward per task is only a small amount. The availability of worker history makes it possible to mine workers' task preferences and to recommend tasks they favor. Our exploratory study of survey results collected from Amazon Mechanical Turk (MTurk) shows that workers' histories can reflect their task preferences in crowdsourcing systems. Task recommendation can help workers find suitable tasks faster and help requesters receive good-quality output sooner. However, the previously proposed classification-based task recommendation approach considers only worker performance history and does not exploit worker task-searching history. In this paper, we propose a task recommendation framework for task preference modeling and preference-based task recommendation, aiming to recommend tasks that workers are likely to prefer working on and for which they will produce output accepted by requesters. We consider both worker performance history and worker task-searching history to reflect workers' task preferences more accurately. To the best of our knowledge, we are the first to use matrix factorization for task recommendation in crowdsourcing systems.
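To make the matrix-factorization idea concrete, the sketch below is a minimal, illustrative implementation (not the paper's actual method): a worker-by-task preference matrix, whose observed entries could be derived from performance and task-searching history, is factorized into latent worker and task factors by stochastic gradient descent, and the reconstructed matrix scores unobserved worker-task pairs for recommendation. All variable names and the toy data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def factorize(R, mask, k=2, lr=0.01, reg=0.1, epochs=500):
    """Factorize R (workers x tasks) into W @ T.T using SGD.

    R    -- preference matrix; only entries where mask is True are observed
    k    -- number of latent factors
    lr   -- SGD learning rate
    reg  -- L2 regularization weight on the latent factors
    """
    n_workers, n_tasks = R.shape
    W = 0.1 * rng.standard_normal((n_workers, k))  # worker latent factors
    T = 0.1 * rng.standard_normal((n_tasks, k))    # task latent factors
    for _ in range(epochs):
        for i in range(n_workers):
            for j in range(n_tasks):
                if mask[i, j]:
                    err = R[i, j] - W[i] @ T[j]
                    # Gradient steps on the squared error plus L2 penalty.
                    W[i] += lr * (err * T[j] - reg * W[i])
                    T[j] += lr * (err * W[i] - reg * T[j])
    return W, T

# Toy data: 4 workers x 5 tasks; preferences in [0, 1], 0 means unobserved.
R = np.array([
    [1.0, 0.8, 0.0, 0.1, 0.0],
    [0.9, 0.0, 0.2, 0.0, 0.1],
    [0.0, 0.1, 0.9, 1.0, 0.8],
    [0.1, 0.0, 0.8, 0.9, 0.0],
])
mask = R > 0
W, T = factorize(R, mask)
pred = W @ T.T  # predicted preferences; unobserved cells rank candidate tasks
```

The unobserved cells of `pred` provide preference scores for tasks a worker has not yet seen; recommending the top-scoring tasks per worker is the standard use of such a factorization.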
