The ability to communicate in natural language has long been considered a defining characteristic of human intelligence. We hold the ability to express ideas in writing as a pinnacle of this uniquely human language facility, one that defies formulaic or algorithmic specification. So it comes as no surprise that attempts to devise computer programs that evaluate writing are often met with resounding skepticism. Nevertheless, automated writing-evaluation systems might provide precisely the platforms we need to elucidate many of the features that characterize good and bad writing, and many of the linguistic, cognitive, and other skills that underlie the human capacity for both reading and writing. Using computers to increase our understanding of the textual features and cognitive skills involved in creating and comprehending written text will have clear benefits. It will help us develop more effective instructional materials for improving reading, writing, and other human communication abilities. It will also help us develop more effective technologies, such as search engines and question-answering systems, for providing universal access to electronic information. A sketch of the brief history of automated writing-evaluation research and its future directions might lend some credence to this argument.