A Cheap MT-Evaluation Method Based on Internet Searches

In this paper, we first argue that the human reference translations used to calculate MT evaluation scores such as BLEU need to be revised. Since this revision is time- and resource-consuming, we propose instead an inexpensive MT evaluation method that detects and counts examples of characteristic MT output, referred to herein as instances of machine-translationness, by performing Internet searches. The goal is to obtain a sketch of the quality of the output, which, on occasion, is sufficient for the purpose of the evaluation. Moreover, this evaluation method can be adapted to detect the drawbacks of a system in order to develop a new version, and it can also be helpful when post-editing machine-translated documents.
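To make the idea concrete, the sketch below illustrates one way such a search-based check could work: each n-gram of the MT output is submitted as an exact-match web query, and phrases with suspiciously few hits are flagged as candidate instances of machine-translationness. This is a minimal illustration under assumed parameters, not the paper's implementation: the `web_hit_count` function is a hypothetical placeholder (a real version would call a search engine API), and `HIT_THRESHOLD` is an arbitrary cutoff.

```python
"""Minimal sketch of search-based MT evaluation: flag n-grams of the
MT output whose web hit counts fall below a threshold, treating them
as candidate instances of machine-translationness."""

from typing import Iterable, List, Tuple

HIT_THRESHOLD = 10  # assumed cutoff: fewer hits suggests an unnatural phrase


def ngrams(tokens: List[str], n: int) -> Iterable[Tuple[str, ...]]:
    """Yield all contiguous n-grams of a token sequence."""
    for i in range(len(tokens) - n + 1):
        yield tuple(tokens[i : i + n])


def web_hit_count(phrase: str) -> int:
    """Hypothetical stand-in for an exact-match ("quoted") web search.

    A real implementation would query a search engine API and return
    the reported result count; canned values keep the sketch runnable.
    """
    canned = {
        "gave a kick": 2,  # literal MT rendering: near-zero hits
        "a kick to": 1,
    }
    return canned.get(phrase, 100)  # default: common enough to pass


def machine_translationness(sentence: str, n: int = 3) -> List[str]:
    """Return the n-gram phrases whose hit counts suggest MT output."""
    tokens = sentence.lower().split()
    suspects = []
    for gram in ngrams(tokens, n):
        phrase = " ".join(gram)
        if web_hit_count(phrase) < HIT_THRESHOLD:
            suspects.append(phrase)
    return suspects


if __name__ == "__main__":
    mt_output = "He gave a kick to the bucket"
    for phrase in machine_translationness(mt_output):
        print("possible machine-translationness:", phrase)
```

Counting the flagged phrases over a document then yields the kind of quality sketch described above; the same flags can point post-editors directly at the passages most likely to need revision.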