ROUGE: A Package for Automatic Evaluation of Summaries
[1] I. Dan Melamed, et al. Automatic Evaluation and Uniform Filter Cascades for Inducing N-Best Translation Lexicons, 1995, VLC@ACL.
[2] Debashis Kushary, et al. Bootstrap Methods and Their Application, 2000, Technometrics.
[3] Wai Lam, et al. Meta-evaluation of Summaries in a Cross-lingual Environment using Content-based Metrics, 2002, COLING.
[4] Salim Roukos, et al. Bleu: a Method for Automatic Evaluation of Machine Translation, 2002, ACL.
[5] M. Maybury, et al. Automatic Summarization, 2002, Computational Linguistics.
[6] Paul Over, et al. Intrinsic Evaluation of Generic News Text Summarization Systems, 2003.
[7] Eduard H. Hovy, et al. Automatic Evaluation of Summaries Using N-gram Co-occurrence Statistics, 2003, NAACL.
[8] I. Dan Melamed, et al. Precision and Recall of Machine Translation, 2003, NAACL.
[9] Chin-Yew Lin, et al. Automatic Evaluation of Machine Translation Quality Using Longest Common Subsequence and Skip-Bigram Statistics, 2004, ACL.
[10] Chin-Yew Lin, et al. Looking for a Few Good Metrics: ROUGE and its Evaluation, 2004.