We describe METEOR, an automatic metric for machine translation evaluation that is based on a generalized concept of unigram matching between the machine-produced translation and human-produced reference translations. Unigrams can be matched based on their surface forms, stemmed forms, and meanings; furthermore, METEOR can be easily extended to include more advanced matching strategies. Once all generalized unigram matches between the two strings have been found, METEOR computes a score for this matching using a combination of unigram-precision, unigram-recall, and a measure of fragmentation that is designed to directly capture how well-ordered the matched words in the machine translation are in relation to the reference. We evaluate METEOR by measuring the correlation between the metric scores and human judgments of translation quality. We compute the Pearson R correlation value between its scores and human quality assessments of the LDC TIDES 2003 Arabic-to-English and Chinese-to-English datasets. We perform segment-by-segment correlation, and show that METEOR obtains an R correlation value of 0.347 on the Arabic data and 0.331 on the Chinese data. This is shown to be an improvement over using simply unigram-precision, unigram-recall, or their harmonic F1 combination. We also perform experiments to show the relative contributions of the various mapping modules.
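The following is a minimal sketch of the segment-level scoring step described above, assuming the parameterization reported in the METEOR paper (recall weighted 9:1 over precision in the harmonic mean, and a fragmentation penalty of 0.5·(chunks/matches)³). The function name `meteor_score` and its arguments are illustrative; the alignment itself (number of matched unigrams and of contiguous chunks) is assumed to have already been produced by the matching modules.

```python
def meteor_score(matches, hyp_len, ref_len, chunks,
                 alpha=9.0, gamma=0.5, beta=3.0):
    """Score one segment from a precomputed unigram alignment.

    matches -- hypothesis unigrams aligned to the reference
               (exact, stemmed, or meaning-based matches)
    hyp_len -- number of unigrams in the hypothesis
    ref_len -- number of unigrams in the reference
    chunks  -- number of contiguous, identically ordered runs of matches
    """
    if matches == 0:
        return 0.0
    precision = matches / hyp_len
    recall = matches / ref_len
    # Recall-weighted harmonic mean: with alpha=9 this is 10PR / (R + 9P).
    fmean = ((1 + alpha) * precision * recall) / (recall + alpha * precision)
    # Fragmentation penalty: fewer, longer chunks indicate better word order.
    penalty = gamma * (chunks / matches) ** beta
    return fmean * (1 - penalty)


# Example: a 7-word hypothesis with 6 matched unigrams grouped into 2 chunks,
# scored against a 7-word reference.
print(meteor_score(matches=6, hyp_len=7, ref_len=7, chunks=2))
```

In this formulation, precision and recall reward lexical overlap while the chunk-based penalty discounts matchings that are scattered out of order relative to the reference.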