Modelling the Process of Learning Analytics Using a Reinforcement Learning Framework

Learning analytics (LA) is a relatively new research field concerned with analysing data collected from various sources to provide insights for enhancing learning and teaching. A complete LA process typically involves five distinct yet interrelated stages – capture, report, predict, act and refine – which together form a sequential decision process. To date, research efforts have focused mostly on independent research questions within individual stages; a formal framework is therefore needed to quantify and guide the LA process as a whole. In this paper, we discuss how reinforcement learning (RL), a subfield of machine learning, can be employed to address the sequential decision problem underlying the LA process. In particular, we map the LA stages onto an RL framework consisting of a state space, an action space, a transition function and a reward function, and illustrate with examples how the three most studied optimality criteria in RL – the finite-horizon, discounted infinite-horizon and average-reward models – can be applied to the LA process. The underlying assumptions, advantages and open issues of the proposed RL framework are also discussed.
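To make the mapping concrete, the sketch below models the five LA stages as states of a simple Markov decision process and evaluates a discounted return. This is a minimal illustration, not the paper's implementation: the deterministic stage-to-stage transitions, the placeholder reward (assigning reward only to the act stage) and the discount factor are all illustrative assumptions.

```python
# Minimal sketch of the LA process as an MDP (illustrative only).
# States are the five LA stages; transitions, rewards and the
# discount factor are placeholder assumptions for this example.

STAGES = ["capture", "report", "predict", "act", "refine"]

def transition(state: str) -> str:
    """Deterministic transition: each stage leads to the next,
    and 'refine' loops back to 'capture' (an assumed cycle)."""
    i = STAGES.index(state)
    return STAGES[(i + 1) % len(STAGES)]

def reward(state: str) -> float:
    """Placeholder reward: suppose only acting on insights pays off."""
    return 1.0 if state == "act" else 0.0

def discounted_return(start: str, gamma: float = 0.9, horizon: int = 20) -> float:
    """Sum of gamma**t * r_t over a fixed horizon: a finite-horizon
    evaluation of the discounted optimality criterion."""
    total, state = 0.0, start
    for t in range(horizon):
        total += (gamma ** t) * reward(state)
        state = transition(state)
    return total

print(round(discounted_return("capture"), 4))
```

Setting `gamma = 1` recovers the plain finite-horizon criterion, while dividing the undiscounted sum by the horizon approximates the average-reward criterion, so all three criteria discussed in the paper can be explored with small variations of this loop.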