Analyzing a Payment-Based Question Answering Service

Community-based question answering (CQA) services receive a large volume of questions today, and it is increasingly challenging to motivate domain experts to provide timely answers. Recently, payment-based CQA services have explored new incentive models that engage real-world experts and celebrities by allowing them to set a price on their answers. In this paper, we perform a data-driven analysis of Fenda, a payment-based CQA service that has had initial success with this incentive model. Using a large dataset of 220K paid questions (worth 1 million USD) collected over two months, we examine how monetary incentives affect different players in the system and their engagement over time. Our study reveals several key findings: while monetary incentives enable quick answers from experts, they also drive certain users to aggressively game the system for profit. In addition, this incentive model turns the CQA service into a supplier-driven marketplace in which users must proactively adjust their prices. We find that famous people are unwilling to lower their prices, which in turn hurts their income and engagement level over time. Based on our results, we discuss implications for the design of future payment-based CQA services.
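
To make the kind of analysis described above concrete, here is a minimal sketch of how per-answerer engagement and pricing could be tracked over time from a paid-question log. The schema (answerer, price_cny, asked_at) and the synthetic records are assumptions for illustration only; they are not the paper's actual Fenda dataset or methodology.

```python
import pandas as pd

# Synthetic stand-in for a paid-question log: one row per paid question.
# Column names and values are hypothetical, chosen only for illustration.
questions = pd.DataFrame(
    {
        "answerer": ["alice", "alice", "bob", "bob", "alice", "bob"],
        "price_cny": [50, 50, 10, 8, 40, 8],
        "asked_at": pd.to_datetime(
            ["2016-06-01", "2016-06-20", "2016-06-03",
             "2016-07-05", "2016-07-18", "2016-07-25"]
        ),
    }
)

# Aggregate each answerer's monthly question volume and average price:
# a simple proxy for engagement level and pricing strategy over time.
monthly = (
    questions
    .groupby(["answerer", pd.Grouper(key="asked_at", freq="MS")])
    .agg(
        questions_received=("price_cny", "size"),
        mean_price=("price_cny", "mean"),
    )
    .reset_index()
)
print(monthly)
```

Comparing `questions_received` against `mean_price` month over month for each answerer is one simple way to surface the pattern the abstract describes, where answerers who keep prices high see their question volume decline over time.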
