User studies are important for many aspects of the design process and involve techniques ranging from informal surveys to rigorous laboratory studies. However, the costs of engaging users often require practitioners to trade off sample size, time requirements, and monetary costs. Micro-task markets, such as Amazon's Mechanical Turk, offer a potential paradigm for engaging a large number of users at low time and monetary cost. Here we investigate the utility of a micro-task market for collecting user measurements, and discuss design considerations for developing remote micro user evaluation tasks. Although micro-task markets have great potential for rapidly collecting user measurements at low cost, we found that special care is needed in formulating tasks in order to harness the capabilities of the approach.