Analyzing crowd workers in mobile pay-for-answer Q&A

Despite the popularity of mobile pay-for-answer Q&A services, little is known about the people who answer questions on these services. In this paper we examine 18.8 million question-and-answer pairs from Jisiklog, the largest mobile pay-for-answer Q&A service in Korea, together with the results of a complementary survey of 245 Jisiklog workers. We use these data to investigate key motivators of participation, the working strategies of experienced users, and longitudinal interaction dynamics. We find that answerers are rarely motivated by social factors; instead, they are driven by financial incentives and intrinsic motives. Additionally, although answers are provided quickly, answerers' topic selection tends to be broad, and experienced workers employ distinctive strategies for answering questions and judging relevance. Finally, our analysis of longitudinal working patterns and community dynamics demonstrates the robustness of mobile pay-for-answer Q&A. These findings have significant implications for the design of mobile pay-for-answer Q&A services.
