Inter-Annotator Agreement in Sentiment Analysis: Machine Learning Perspective
[1] Klaus Krippendorff et al. Answering the Call for a Standard Reliability Measure for Coding Data, 2007.
[2] Marco Guerini et al. Depeche Mood: a Lexicon for Emotion Analysis from Crowd Annotated News, 2014, ACL.
[3] J. Azé et al. Patient's rationale: Patient Knowledge retrieval from health forums, 2014, eTELEMED 2014.
[4] Ron Artstein et al. Survey Article: Inter-Coder Agreement for Computational Linguistics, 2008, CL.
[5] P. Ekman. An argument for basic emotions, 1992.
[6] Victoria Bobicev et al. Confused and Thankful: Multi-label Sentiment Classification of Health Forums, 2017, Canadian Conference on AI.
[7] Alan F. Smeaton et al. A study of inter-annotator agreement for opinion retrieval, 2009, SIGIR.
[8] Victoria Bobicev et al. What Sentiments Can Be Found in Medical Forums?, 2013, RANLP.
[9] Victoria Bobicev et al. What Goes Around Comes Around: Learning Sentiments in Online Medical Forums, 2015, Cognitive Computation.
[10] Stefanie Nowak et al. How reliable are annotations via crowdsourcing: a study about inter-annotator agreement for multi-label image annotation, 2010, MIR '10.
[11] Mark A. Hall et al. Correlation-based Feature Selection for Machine Learning, 2003.
[12] Mike Thelwall et al. Sentiment strength detection for the social web, 2012, J. Assoc. Inf. Sci. Technol.
[13] Andrea Esuli et al. SENTIWORDNET: A Publicly Available Lexical Resource for Opinion Mining, 2006, LREC.
[14] Douglas W. Oard et al. Investigating multi-label classification for human values, 2010, ASIST.