Ranking Learning Objects through Integration of Different Quality Indicators

The solutions used to date for recommending learning objects have proved unsatisfactory. In an attempt to improve the situation, this document highlights the shortcomings of the existing approaches and identifies quality indicators that might be used to provide information on which materials to recommend to users. Next, a synthesized quality indicator that can facilitate the ranking of learning objects according to their overall quality is proposed. In this way, explicit evaluations carried out by users or experts are used along with usage data, thus completing the information on which the recommendation is based. Taking a set of learning objects from the MERLOT repository, we analyzed the relationships between the different quality indicators in order to form an overall quality indicator that can be calculated automatically, guaranteeing that all resources will be rated.
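The abstract does not specify the aggregation function used to synthesize the overall quality indicator. As a minimal sketch of the general idea, the following assumes a weighted linear combination of explicit ratings (expert and peer reviews) and an implicit usage-based indicator, with the weight of any missing explicit rating redistributed so that every object receives a score; the class, field names, and weights are illustrative, not taken from the paper.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LearningObject:
    name: str
    expert_rating: Optional[float] = None  # explicit rating on a 1-5 scale; may be missing
    peer_rating: Optional[float] = None    # explicit user rating on a 1-5 scale; may be missing
    usage_score: float = 0.0               # implicit indicator in [0, 1], e.g. normalized downloads

def overall_quality(lo, weights=(0.4, 0.3, 0.3)):
    """Combine explicit and implicit indicators into one score in [0, 1].

    Explicit 1-5 ratings are rescaled to [0, 1]. When an explicit rating
    is absent, its weight is dropped and the remaining weights are
    renormalized, so unreviewed objects still get an automatic score.
    """
    w_expert, w_peer, w_usage = weights
    parts = [(w_usage, lo.usage_score)]
    if lo.expert_rating is not None:
        parts.append((w_expert, (lo.expert_rating - 1) / 4))
    if lo.peer_rating is not None:
        parts.append((w_peer, (lo.peer_rating - 1) / 4))
    total_weight = sum(w for w, _ in parts)
    return sum(w * v for w, v in parts) / total_weight

def rank(objects):
    """Order learning objects by overall quality, best first."""
    return sorted(objects, key=overall_quality, reverse=True)
```

For example, a fully reviewed object scores a weighted mean of its three rescaled indicators, while an object with no reviews falls back entirely on its usage indicator; either way the ranking is defined for every resource.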
