Pay-per-Question: Towards Targeted Q&A with Payments

Online question and answer (Q&A) services face key challenges in motivating domain experts to provide quick, high-quality answers. Recent systems seek to engage real-world experts by allowing them to set a price on their answers. This leads to a "targeted" Q&A model in which users ask questions of a specific expert by paying that expert's price. In this paper, we perform a case study of two emerging targeted Q&A systems, Fenda (China) and Whale (US), to understand how monetary incentives affect user behavior. By analyzing a large dataset of 220K questions (worth 1 million USD), we find that payments indeed enable quick answers from experts, but also drive certain users to game the system for profit. In addition, this model requires experts to proactively adjust their prices to remain profitable: those who are unwilling to lower their prices are likely to see their income and engagement decline over time.
