Mobile Interfaces for Crowdsourced Multimedia Microtasks

Crowdsourced mobile microtasking represents a significant opportunity in emerging economies such as India, which are characterized by high levels of mobile phone penetration and large numbers of educated people who are unemployed or underemployed. Indeed, mobile phones have been used successfully in many parts of the world for microtasking, primarily for crowdsourced data collection and for text- or image-based tasks. More complex tasks, such as the annotation of multimedia like audio or video, have traditionally been confined to desktop interfaces. With the rapid evolution of the multimedia capabilities of mobile phones in these geographies, we believe that the nature of microtasks carried out on these devices, as well as the design of interfaces for such microtasks, warrants investigation. In this paper we explore the design of mobile phone interfaces for a set of multimedia-based microtasks on feature phones, which represent the vast majority of multimedia-capable mobile phones in these geographies. In an initial study using paper prototypes, we evaluate three types of multimedia content (images, audio and video) and three interfaces for data input: Direct Entry, Scroll Key Input and Key Mapping. We observe that while there are clear interface preferences for image and audio tasks, user preference for video tasks varies with 'task complexity', the 'density' of data the annotator has to deal with. In a second study, we prototype two interfaces for video-based annotation tasks: a single-screen input method and a two-screen phased interface. We evaluate the two interface designs and the three data input methods studied earlier by means of a user study with 36 participants. Our findings show that for less dense data, participants prefer Key Mapping as the input technique. For dense data, while participants still prefer Key Mapping, our data shows that the accuracy of data input with Key Mapping is significantly lower than with Scroll Key Input. The study also provides insight into the strategy each user develops and employs to input data. We believe these findings will enable other researchers to build effective user interfaces for mobile microtasks, and will be of value to UI developers, HCI researchers and microtask designers.
