The Timeliness Deviation: A Novel Approach to Evaluate Educational Recommender Systems for Closed Courses

The decision on which item to learn next in a course can be supported by a recommender system (RS), which aims to make the learning process more efficient and effective. However, learners and learning activities change frequently over time. This raises the question of how timely, appropriate recommendations of learning resources are actually evaluated and how they can be compared. Researchers have found that the area of Technology Enhanced Learning lacks not only a standardized dataset definition but also standardized definitions of evaluation procedures for RS. This paper argues that, in a closed-course setting, a time-dependent split into training and test sets is more appropriate than the usual cross-validation for evaluating the Top-N recommended learning resources at various points in time. Moreover, a new measure is introduced to determine the timeliness deviation between the point in time at which an item is recommended and the point in time at which the user actually accesses it. Several recommender algorithms, including two novel ones, are evaluated with the time-dependent evaluation framework, and the results, as well as the appropriateness of the framework, are discussed.
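The two ideas in the abstract can be illustrated with a short sketch. The abstract does not give the paper's exact formulas, so the function names, the data layout, and the use of a mean absolute gap in days for the timeliness deviation are illustrative assumptions, not the authors' definitions:

```python
from datetime import datetime

def time_based_split(interactions, split_time):
    """Split user-item interactions at a fixed point in time.

    Events strictly before split_time form the training set; later
    events form the test set. Unlike random cross-validation, this
    mimics a live closed-course deployment where only past behavior
    is available when a recommendation is made.
    """
    train = [e for e in interactions if e["time"] < split_time]
    test = [e for e in interactions if e["time"] >= split_time]
    return train, test

def timeliness_deviation(recommend_times, access_times):
    """Mean absolute gap, in days, between the time an item was
    recommended and the time the user actually accessed it.

    A smaller value means recommendations arrived closer to when
    the learner really needed the resource.
    """
    gaps = [abs((rec - acc).total_seconds()) / 86400.0
            for rec, acc in zip(recommend_times, access_times)]
    return sum(gaps) / len(gaps)
```

For example, an item recommended on January 10 but accessed on January 12 contributes a gap of 2 days; averaging such gaps over all evaluated recommendations yields the deviation score.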
