Predicting item difficulty in a reading comprehension test with an artificial neural network

This article reports the results of using a three-layer backpropagation artificial neural network to predict item difficulty in a reading comprehension test. Two network structures were developed: one with a sigmoid function in the output processing unit and one without. The dataset, which consisted of a table of coded test items and their corresponding item difficulties, was partitioned into a training set and a test set for training and testing the neural networks. To demonstrate the consistency of the networks in predicting item difficulty, the training and testing runs were repeated four times, each starting from a new set of initial weights; the runs were then repeated with the training and test sets swapped. The mean squared error between actual and predicted item difficulties showed that the networks predicted consistently across the multiple training and testing runs. Significant correlations were obtained between the actual and predicted item difficulties, and a Kruskal-Wallis test indicated no significant difference between the ranks of the actual and predicted values.
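The setup described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: a three-layer backpropagation network with a single output unit whose sigmoid can be toggled on or off (mirroring the two structures studied), trained on synthetic stand-in data, since the coded-item table is not reproduced here. The item codes, difficulty scale, network size, learning rate, and epoch count are all assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ThreeLayerNet:
    """Three-layer (input-hidden-output) backpropagation network.

    sigmoid_output toggles the sigmoid on the single output unit,
    mirroring the paper's two network structures.
    """
    def __init__(self, n_in, n_hidden, sigmoid_output=True, lr=0.5, seed=0):
        rng = np.random.default_rng(seed)       # "new set of initial weights" per run
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
        self.b2 = np.zeros(1)
        self.sigmoid_output = sigmoid_output
        self.lr = lr

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)  # hidden layer activations
        z = self.h @ self.W2 + self.b2
        self.y = sigmoid(z) if self.sigmoid_output else z
        return self.y

    def train_step(self, X, t):
        """One batch gradient-descent step on squared error; returns MSE."""
        y = self.forward(X)
        err = y - t.reshape(-1, 1)
        # Output delta: include the sigmoid derivative only when the
        # output unit actually applies a sigmoid.
        d_out = err * y * (1.0 - y) if self.sigmoid_output else err
        d_hid = (d_out @ self.W2.T) * self.h * (1.0 - self.h)
        n = len(X)
        self.W2 -= self.lr * self.h.T @ d_out / n
        self.b2 -= self.lr * d_out.mean(axis=0)
        self.W1 -= self.lr * X.T @ d_hid / n
        self.b1 -= self.lr * d_hid.mean(axis=0)
        return float((err ** 2).mean())

# Synthetic stand-in for the coded-item table: 40 items described by
# 8 binary codes, with difficulty in (0, 1) as a noisy function of the codes.
rng = np.random.default_rng(1)
X = rng.integers(0, 2, (40, 8)).astype(float)
p = sigmoid(X @ rng.normal(0.0, 1.0, 8) + rng.normal(0.0, 0.1, 40))

# Partition into a training set and a test set (these can be swapped,
# as in the repeated runs described in the abstract).
train_X, test_X, train_p, test_p = X[:20], X[20:], p[:20], p[20:]

net = ThreeLayerNet(n_in=8, n_hidden=5, sigmoid_output=True, seed=2)
for _ in range(2000):
    net.train_step(train_X, train_p)

pred = net.forward(test_X).ravel()
mse = float(((pred - test_p) ** 2).mean())        # actual vs. predicted MSE
r = float(np.corrcoef(pred, test_p)[0, 1])        # actual vs. predicted correlation
print(f"test MSE = {mse:.4f}, Pearson r = {r:.2f}")
```

Repeating the loop above with different seeds reproduces the multiple-run consistency check; the rank comparison would additionally require a Kruskal-Wallis test (e.g. `scipy.stats.kruskal`) on the actual and predicted difficulty values.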
