Modern deep neural networks have achieved impressive results on a variety of automated tasks, such as text generation, grammar learning, and speech recognition. This paper discusses how education research might leverage recurrent neural network architectures, presenting two small case studies. Specifically, we train a two-layer Long Short-Term Memory (LSTM) network on two distinct forms of education data: (1) essays written by students in a summative environment, and (2) MOOC clickstream data. Without any hand-specified features, the network attempts to learn the underlying structure of the input sequences. After training, the model can be used generatively to produce new sequences exhibiting the same underlying patterns as the input distribution. These early explorations demonstrate the potential of applying deep learning techniques to large education data sets.
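The abstract does not give implementation details, but the core architecture it names, a two-layer LSTM used generatively, can be sketched as a forward pass plus softmax sampling. The code below is a minimal illustration with untrained random weights (the hidden size, vocabulary size, and helper names are assumptions for the example, not the authors' configuration): each sampled token is one-hot encoded and fed back in as the next input.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """One LSTM layer (Hochreiter & Schmidhuber, 1997), forward pass only."""
    def __init__(self, input_size, hidden_size, rng):
        # One stacked weight matrix covering all four gates.
        self.W = rng.standard_normal((4 * hidden_size, input_size + hidden_size)) * 0.1
        self.b = np.zeros(4 * hidden_size)
        self.hidden_size = hidden_size

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        H = self.hidden_size
        i = sigmoid(z[0:H])        # input gate
        f = sigmoid(z[H:2*H])      # forget gate
        g = np.tanh(z[2*H:3*H])    # candidate cell state
        o = sigmoid(z[3*H:4*H])    # output gate
        c = f * c + i * g          # update cell state
        h = o * np.tanh(c)         # emit hidden state
        return h, c

def sample_sequence(vocab_size, hidden_size, length, seed=0):
    """Generate a token sequence from a two-layer LSTM by sampling the
    softmax output at each step and feeding the sample back as input."""
    rng = np.random.default_rng(seed)
    layer1 = LSTMCell(vocab_size, hidden_size, rng)
    layer2 = LSTMCell(hidden_size, hidden_size, rng)
    W_out = rng.standard_normal((vocab_size, hidden_size)) * 0.1
    h1 = c1 = h2 = c2 = np.zeros(hidden_size)
    token, out = 0, []
    for _ in range(length):
        x = np.zeros(vocab_size)
        x[token] = 1.0                      # one-hot encode previous token
        h1, c1 = layer1.step(x, h1, c1)     # first LSTM layer
        h2, c2 = layer2.step(h1, h2, c2)    # second LSTM layer
        logits = W_out @ h2
        p = np.exp(logits - logits.max())   # numerically stable softmax
        p /= p.sum()
        token = int(rng.choice(vocab_size, p=p))
        out.append(token)
    return out
```

In practice the weights would be trained with backpropagation through time on the essay or clickstream sequences; this sketch only shows the generative sampling loop's shape.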