Interactive Unknowns Recommendation in E-Learning Systems

The rise of E-learning systems has created an anytime-anywhere learning environment for everyone by providing a wide range of online courses and tests. However, due to the lack of teacher-student interaction, such ubiquitous learning is generally not as effective as offline classes. In traditional offline courses, teachers use real-time interaction to teach each student according to his or her aptitude, guided by the feedback students give in class. Without the intervention of instructors, it is difficult for users to become aware of their personal unknowns. In this paper, we address an important issue: the exploration of 'user unknowns' through an interactive question-answering process in E-learning systems. A novel interactive learning system, called CagMab, is devised to recommend questions interactively with a round-by-round strategy, which supports applications such as a conversational bot for self-evaluation. This flow enables users to discover their weaknesses and further helps them progress. Despite its importance, discovering personal unknowns remains a challenging problem in E-learning systems. Although formulating the problem within the multi-armed bandit framework provides a solution, it often yields suboptimal results for interactive unknowns recommendation because it relies only on the contextual features of answered questions. Note that each question is associated with concepts, and similar concepts are likely to be linked manually or systematically, which naturally forms concept graphs. Mining the rich relationships among users, questions, and concepts could therefore help provide better unknowns recommendation. To this end, we develop a novel interactive learning framework that borrows strength from concept-aware graph embedding for learning user unknowns. Our experimental studies on real data show that the proposed framework effectively discovers user unknowns in an interactive fashion for recommendation in E-learning systems.
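To make the bandit-plus-embedding formulation concrete, the following is a minimal sketch of a round-by-round question recommender in the style of a LinUCB contextual bandit, where each question's feature vector is assumed to come from pretrained concept-graph embeddings. This is not the paper's CagMab algorithm; the class name, dimensions, and the reward definition (reward 1 when the user answers incorrectly, i.e., an "unknown" is uncovered) are illustrative assumptions.

```python
import numpy as np

class ConceptAwareBandit:
    """Hypothetical LinUCB-style recommender over concept-embedding features."""

    def __init__(self, dim, alpha=1.0):
        self.alpha = alpha          # exploration weight
        self.A = np.eye(dim)        # ridge-regression covariance
        self.b = np.zeros(dim)      # reward-weighted feature sum

    def select(self, question_embeddings):
        """Pick the question with the highest upper confidence bound."""
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b      # current estimate of the user's unknowns
        ucb = question_embeddings @ theta + self.alpha * np.sqrt(
            np.einsum("ij,jk,ik->i", question_embeddings, A_inv, question_embeddings)
        )
        return int(np.argmax(ucb))

    def update(self, x, reward):
        """Assumed reward: 1 if the user answered incorrectly (an unknown found)."""
        self.A += np.outer(x, x)
        self.b += reward * x


# Toy usage: question embeddings could be averages of concept embeddings
# learned by a graph-embedding method such as LINE (assumption, not the paper's setup).
rng = np.random.default_rng(0)
q_emb = rng.normal(size=(50, 16))            # 50 questions, 16-dim embeddings
bandit = ConceptAwareBandit(dim=16)
for _ in range(10):
    q = bandit.select(q_emb)
    answered_wrong = rng.random() < 0.4      # simulated user feedback
    bandit.update(q_emb[q], float(answered_wrong))
```

In this sketch, the embedding step is what would inject the user-question-concept relationships the abstract emphasizes; a plain contextual bandit over raw question features would correspond to the baseline the authors argue is suboptimal.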
