[1] J. Fleiss. Measuring nominal scale agreement among many raters, 1971.
[2] J. R. Landis, et al. The measurement of observer agreement for categorical data, 1977, Biometrics.
[3] John Yearwood, et al. Automated opinion detection: Implications of the level of agreement between human raters, 2010, Inf. Process. Manag.
[4] José Luis Vicedo González, et al. TREC: Experiment and evaluation in information retrieval, 2007, J. Assoc. Inf. Sci. Technol.
[5] York Sure-Vetter, et al. Science models as value-added services for scholarly information systems, 2011, Scientometrics.
[6] Marcia J. Bates, et al. Where should the person stop and the information search interface start?, 1990, Inf. Process. Manag.
[7] Peter Mutschke, et al. Autorennetzwerke: Netzwerkanalyse als Mehrwertdienst für Informationssysteme [Author networks: network analysis as a value-added service for information systems], 2004, ISI.
[8] Philipp Mayr, et al. Reducing semantic complexity in distributed Digital Libraries: treatment of term vagueness and document re-ranking, 2007, arXiv.
[9] Mark Sanderson, et al. Relevance judgments between TREC and non-TREC assessors, 2008, SIGIR '08.
[10] David C. Blair. Information retrieval and the philosophy of language, 2003, Annu. Rev. Inf. Sci. Technol.
[11] Howard D. White. ‘Bradfordizing’ search output: how it would help online users, 1981.
[12] Christopher D. Manning, et al. Introduction to Information Retrieval, 2010, J. Assoc. Inf. Sci. Technol.
[13] Vivien Petras, et al. Translating Dialects in Search: Mapping between Specialized Languages of Discourse and Documentary Languages, 2006.
[14] Philipp Mayr. Re-Ranking auf Basis von Bradfordizing für die verteilte Suche in digitalen Bibliotheken [Re-ranking based on Bradfordizing for federated search in digital libraries], 2009.