Deep Neural Networks and How They Apply to Sequential Education Data

Modern deep neural networks have achieved impressive results on a variety of automated tasks, such as text generation, grammar learning, and speech recognition. This paper discusses how education research might leverage recurrent neural network architectures, through two small case studies. Specifically, we train a two-layer Long Short-Term Memory (LSTM) network on two distinct forms of education data: (1) essays written by students in a summative environment, and (2) MOOC clickstream data. Without any hand-engineered features, the network learns the underlying structure of the input sequences. After training, the model can be used generatively to produce new sequences exhibiting the same underlying patterns as the input distribution. These early explorations demonstrate the potential of applying deep learning techniques to large education data sets.
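To make the architecture concrete, the following is a minimal NumPy sketch of the forward pass of a two-layer LSTM, the kind of network described above. All names, sizes, and the random initialization are illustrative assumptions, not the paper's actual implementation (which would be trained with backpropagation on real essay or clickstream sequences).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    # One LSTM time step. Pre-activations for the four gates are
    # stacked in z as [input, forget, output, candidate].
    z = W @ x + U @ h_prev + b
    H = h_prev.shape[0]
    i = sigmoid(z[0*H:1*H])        # input gate
    f = sigmoid(z[1*H:2*H])        # forget gate
    o = sigmoid(z[2*H:3*H])        # output gate
    g = np.tanh(z[3*H:4*H])        # candidate cell update
    c = f * c_prev + i * g         # new cell state
    h = o * np.tanh(c)             # new hidden state
    return h, c

def two_layer_lstm(seq, params1, params2, hidden_size):
    # Feed each input vector through two stacked LSTM layers;
    # layer 2 consumes layer 1's hidden state at every step.
    h1 = c1 = h2 = c2 = np.zeros(hidden_size)
    outputs = []
    for x in seq:
        h1, c1 = lstm_step(x, h1, c1, *params1)
        h2, c2 = lstm_step(h1, h2, c2, *params2)
        outputs.append(h2)
    return np.array(outputs)

# Illustrative sizes only; a real model would learn these weights.
rng = np.random.default_rng(0)
input_size, hidden_size, seq_len = 5, 8, 12
params1 = (rng.normal(size=(4*hidden_size, input_size)) * 0.1,
           rng.normal(size=(4*hidden_size, hidden_size)) * 0.1,
           np.zeros(4*hidden_size))
params2 = (rng.normal(size=(4*hidden_size, hidden_size)) * 0.1,
           rng.normal(size=(4*hidden_size, hidden_size)) * 0.1,
           np.zeros(4*hidden_size))
seq = rng.normal(size=(seq_len, input_size))
out = two_layer_lstm(seq, params1, params2, hidden_size)
print(out.shape)  # (12, 8): one hidden vector per time step
```

Generative use would add a softmax output layer over the vocabulary (characters for essays, event types for clickstreams) and sample the next token from it at each step, feeding the sample back in as input.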