Replication and Evaluation in CALL

With this thematic issue on replication studies in CALL, we would like to draw attention to the importance of looking back in both CALL research and the development of learning technology. Replicating CALL research and evaluating (commercialized) language-learning tools, software, systems, and environments afford engagement with past findings, outcomes, and results and, through this engagement, yield new insights into current and future language learning in technology-rich contexts. Caws and Heift (in press) argue that, although research and evaluation can be distinguished, in CALL this distinction is not always clear-cut. For example, researchers rely on the evaluation of learning outcomes, task designs, and facets of complex learning processes in their studies, while software evaluators ground their reviews in research findings and often apply methods akin to those used in applied-linguistics research. In this editorial, we will first introduce the theme of this issue – replication studies – and then announce CJ’s new conceptualization of its evaluation section, the Learning Technology Reviews (formerly known as software reviews).