An Assessment of the Statistical and Editorial Output of Text Analysis Programs
Abstract: This study assessed commercially available text analysis programs for their accuracy and usefulness to novice writers. Two topics were examined: the reliability and accuracy of scores and indicators, and the level and validity of the editorial comments. Six text analysis programs were assessed by comparing their output for two previously published Navy technical publications. The analyzers produced different word and sentence counts and different scores on standard readability scales, suggesting that each uses different algorithms to analyze text material. The analyzers' statistical measures may be sensitive to the complexity and format of text samples, but not to their size. The analyzers varied in the number and level of comments, the validity of the comments, and the types of problems they detect. The text analyzers are able to detect problems of usage, sentence length, and other low-level mistakes; they are less able to detect problems involving the relationships between parts of sentences. Specific problems created by the unique characteristics of technical writing are identified (e.g., technical terminology, passive sentence forms, numbers). The analyzers assessed in this study are suitable to supplement, but not replace, traditional editing. If accompanied by additional guidance, they might be used as tutorial aids.
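To make the dependence on counting rules concrete, the sketch below computes one of the standard readability scales mentioned above, the Flesch Reading Ease score (206.835 - 1.015 x words-per-sentence - 84.6 x syllables-per-word), using deliberately simple tokenization. This is a minimal illustration, not the method of any analyzer evaluated in the study; the regular expressions and the count_syllables heuristic are assumptions introduced here, and changing them shifts the resulting score, which is the kind of variation the abstract describes.

```python
# Illustrative sketch: a naive Flesch Reading Ease calculation.
# The tokenization rules below are assumptions for demonstration only;
# commercial analyzers apply their own (often undisclosed) counting rules,
# which is one plausible source of the score differences noted above.
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count runs of consecutive vowels, with a minimum of one.
    vowel_groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(vowel_groups))

def flesch_reading_ease(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Flesch (1948): 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

sample = "Remove the access panel. Verify that the hydraulic pressure reads zero."
print(round(flesch_reading_ease(sample), 1))
```

Counting "pressure" as two syllables instead of three, or treating the abbreviation-terminating period of technical text as a sentence boundary, changes the word, sentence, and syllable totals and therefore the score, even though the passage itself is unchanged.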