More for less: adaptive labeling payments in online labor markets
Maytal Saar-Tsechansky | Tomer Geva | Harel Lustiger