Further Experiments in Language Translation: A Second Evaluation of the Readability of Computer Translations

Language translation by computer has been proposed as a solution to the backlog of training and operational manuals awaiting translation by more conventional means. This study reports one of a series of experiments to assess the quality of translations produced by human translators and by computer. The material under study was technical text (i.e., maintenance manuals), and the translation was from English to Vietnamese. The utility, or readability, of the translations was assessed by reading comprehension tests, by the cloze procedure (in which readers filled in blanks where words had been systematically deleted), and by a rating scale for judging clarity. Time to perform each of these tasks was also measured. The subjects were 141 Vietnamese Navy officer candidates and a control group of 57 U.S. Navy officer candidates. A 500-word passage from a U.S. Navy casualty control instruction was translated by computer into a rough (unedited) and a finished (post-edited) version; in addition, highly competent human translators prepared a Vietnamese text. Some Vietnamese subjects served as controls and took all tests on the English, or untranslated, version. The major conclusions were: (1) translations produced by highly qualified humans were consistently more comprehensible than those produced by computer, whether post-edited or unedited, and post-edited versions of computer-produced text were more comprehensible than unedited ones, although most differences were not statistically significant; (2) Vietnamese Navy officer candidates were able to read the English text as well as its best Vietnamese translation, and their test scores were about as high as those of the American control subjects. Reading speed was not affected by mode of translation.
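The cloze procedure described above can be sketched in a few lines. The deletion interval is an assumption, since the report does not state which one was used; deleting every fifth word is the traditional choice in cloze testing:

```python
def make_cloze(text, interval=5, blank="____"):
    """Delete every `interval`-th word of `text`, replacing it with a blank.

    NOTE: the every-fifth-word interval is an assumption; the report does
    not specify the deletion scheme actually used. Returns the cloze
    passage and the list of deleted words (the scoring key).
    """
    words = text.split()
    deleted = []
    # Walk the word list, blanking positions interval-1, 2*interval-1, ...
    for i in range(interval - 1, len(words), interval):
        deleted.append(words[i])
        words[i] = blank
    return " ".join(words), deleted

passage, key = make_cloze(
    "Language translation by computer has been proposed as a solution "
    "to the backlog of manuals awaiting translation by conventional means."
)
```

Scoring a subject's responses then amounts to comparing the filled-in words against `key`, either requiring exact matches or accepting acceptable synonyms, depending on the scoring convention adopted.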