Automatic assessment of student reading comprehension from short summaries

This paper describes our research on automatically scoring students’ summaries for comprehension using not only text-specific quantitative and qualitative features, but also more complex features based on the computational indices of cohesion available via Coh-Metrix and on Information Content (IC, a measure of text informativeness). We assessed whether human-rated summary scores could be predicted by indices of text complexity and by IC. The IC of the summaries was a better predictor of human scores than word count or any of the Coh-Metrix text complexity dimensions. This finding may justify implementing IC in future automated tools for rating short summaries.
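To make the IC feature concrete, below is a minimal sketch assuming IC is operationalised as the average negative log unigram probability of a summary's tokens relative to a reference corpus; the function name, the toy corpus, and the smoothing scheme are illustrative assumptions, not the paper's exact formulation.

```python
import math
from collections import Counter

def information_content(summary_tokens, corpus_freqs, corpus_size):
    """Average negative log unigram probability of the summary's tokens.

    Rarer (more informative) words contribute larger values; this is one
    common way to operationalise Information Content and may differ from
    the measure used in the study.
    """
    ics = []
    for tok in summary_tokens:
        # Add-one smoothing so unseen tokens do not produce log(0).
        p = (corpus_freqs.get(tok, 0) + 1) / (corpus_size + len(corpus_freqs))
        ics.append(-math.log(p))
    return sum(ics) / len(ics) if ics else 0.0

# Illustrative reference corpus and two hypothetical student summaries.
corpus = "the cell is the basic unit of life all living things are made of cells".split()
freqs, size = Counter(corpus), len(corpus)

generic = "cells are the basic unit of life".split()
specific = "mitochondria convert nutrients into usable chemical energy".split()

print(information_content(generic, freqs, size))   # lower IC: frequent words
print(information_content(specific, freqs, size))  # higher IC: rarer words
```

Under this reading, a summary built from rare, content-bearing words scores higher IC than one built from frequent, generic words, which is one plausible reason such a measure could track human judgments of summary quality better than word count alone.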