The Complexity of Crowdsourcing: Theoretical Problems in Human Computation

What does theoretical computer science have to say about human computation? We identify three problems at the intersection of crowdsourcing, operations research, and theoretical computer science whose solutions would have a major impact on the design, evaluation, and construction of real crowdsourcing systems. In some cases, these problems allow us to sidestep apparently difficult HCI challenges by making appropriate choices at the algorithmic level. In other contexts, theoretical tools provide a formal basis for evaluating the performance of algorithms and classifying the difficulty of tasks in crowdsourcing. We illustrate these problems through two recent projects. The first, Turkomatic, is an attempt to construct a “universal” algorithm for generating workflows on microtask crowdsourcing platforms. The second, MobileWorks, is a new crowdsourcing engine designed from the bottom up to provide appropriate abstractions between the theoretical elements of human computation systems and interface/design questions. We hope this analysis will spur the development of theoretical frameworks for understanding algorithms involving human computation.
