Evaluating Recommender Algorithms for Learning Using Crowdsourcing

Keeping focused on a certain goal or topic when learning with resources found on the Web is a challenge. Creating a hierarchical learning goal structure with activities and sub-activities can help the learner stay on track, and providing useful recommendations for such activities can support the learner further. However, recommendations need to be relevant to the specific goal or activity the learner is currently working on, as well as novel and diverse for the learner. User-centric metrics such as novelty and diversity are best measured by asking the users themselves. Nonetheless, conducting user experiments is notoriously time-consuming, and access to an adequate number of users is often very limited. Crowdsourcing offers a means to evaluate TEL recommender algorithms by reaching sufficient participants in a shorter time frame and with less effort. In this paper, a concept for evaluating TEL recommender algorithms using crowdsourcing is presented, together with a repeated proof-of-concept evaluation of AScore, a graph-based TEL recommender algorithm that exploits hierarchical activity structures. Results from both experiments support the postulated hypotheses, showing that crowdsourcing can be successfully applied to evaluate TEL recommender algorithms.
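
To make the evaluation concept more concrete, the sketch below shows one way crowdsourced judgments could be aggregated into per-algorithm scores for the user-centric metrics mentioned above (relevance, novelty, diversity). This is a minimal illustration under stated assumptions, not the paper's actual experimental design: the 1-5 rating scale, the field names, and the aggregation by simple averaging are hypothetical choices made for this sketch.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical crowdsourced judgments: each worker rates a recommended
# resource for a given learning activity on relevance, novelty, and
# diversity (assumed 1-5 scale; not taken from the paper).
judgments = [
    {"algorithm": "AScore",   "activity": "a1", "resource": "r1",
     "relevance": 5, "novelty": 4, "diversity": 3},
    {"algorithm": "AScore",   "activity": "a1", "resource": "r2",
     "relevance": 4, "novelty": 5, "diversity": 4},
    {"algorithm": "baseline", "activity": "a1", "resource": "r3",
     "relevance": 3, "novelty": 2, "diversity": 2},
    {"algorithm": "baseline", "activity": "a1", "resource": "r4",
     "relevance": 4, "novelty": 3, "diversity": 3},
]

def aggregate(judgments, metrics=("relevance", "novelty", "diversity")):
    """Average each user-centric metric per recommender algorithm."""
    collected = defaultdict(lambda: defaultdict(list))
    for j in judgments:
        for m in metrics:
            collected[j["algorithm"]][m].append(j[m])
    return {alg: {m: mean(vals) for m, vals in per_metric.items()}
            for alg, per_metric in collected.items()}

if __name__ == "__main__":
    for alg, scores in aggregate(judgments).items():
        print(alg, {m: round(v, 2) for m, v in scores.items()})
```

In a real crowdsourcing setup, such an aggregation would additionally need quality control (e.g. test questions or agreement filtering) and a statistical test before concluding that one algorithm outperforms another; the sketch only covers the final averaging step.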
