Towards Automated Evaluation of Learning Resources Inside Repositories

Current Learning Object Repositories typically assess the quality of their resources through impressions of quality provided by members of the repository community. Although this strategy can be considered effective to some extent, the number of resources inside repositories tends to grow more rapidly than the number of evaluations contributed by the community, leaving many resources without any quality assessment. The present work describes the results of two experiments that automatically generate quality information about learning resources based on their intrinsic features as well as on the evaluative metadata (ratings) available about them in the MERLOT repository. Preliminary results point to the feasibility of this goal, suggesting that the method can serve as a starting point for the automatic generation of internal quality information about resources inside repositories.
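
To make the approach concrete, below is a minimal sketch of the kind of classification experiment the abstract describes: training a model to predict a high/low quality label for a learning resource from its intrinsic features. The feature names, the synthetic data, and the random-forest classifier are illustrative assumptions for this sketch, not details taken from the paper.

    # Minimal sketch: predicting a quality label for learning resources
    # from intrinsic features. Features, data, and classifier choice are
    # illustrative assumptions, not the paper's actual setup.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Hypothetical intrinsic features per resource:
    # [word_count, n_links, n_images, n_multimedia_elements]
    X = rng.integers(low=0, high=500, size=(200, 4)).astype(float)

    # Hypothetical binary label: 1 = highly rated in the repository
    # (e.g., a peer-review rating above a threshold), 0 = otherwise.
    # Here the label is generated synthetically for demonstration only.
    y = (X[:, 0] + 2 * X[:, 1] + rng.normal(0, 50, 200) > 600).astype(int)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"5-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

In practice, the labels would come from repository ratings (e.g., MERLOT evaluations) rather than the synthetic rule used here, and cross-validated accuracy gives a first estimate of whether intrinsic features carry a usable quality signal.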
