Quality rating of learning objects using Bayesian Belief Networks

The unceasing growth of the Internet has led to new modes of learning in which students routinely interact online with instructors, other students and, increasingly, digital resources. Much recent research has focused on building infrastructure for these activities, especially to facilitate searching, filtering and recommending online resources known as learning objects [1]. Although newly defined standards for learning object metadata [2] are expected to greatly improve searching and filtering capabilities, students, teachers and instructional developers may still face many pages of result listings returned from a single learning object query, and the listed objects tend to vary widely in quality. Without sound recommendation, users must sift through this overwhelming volume of material and can easily settle on poorly designed and developed instructional materials, wasting time and effort. Hence, there is a clear need for quality evaluations, expressed in a coherent, standardised format, that can inform recommendation; such evaluations in turn require well-defined rating criteria, and in the last few years a number of quality rating standards have been developed. Two questions follow. As different evaluation instruments are deployed in learning object repositories serving specialised communities of users, what methods can translate evaluative data across instruments so that this data can be shared among repositories? And how can the many possible explicit and implicit measures of preference and quality be combined to recommend objects to users?
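To make the combination question concrete, the sketch below shows how a simple Bayesian belief network, the approach named in the title, can fuse one explicit and one implicit quality signal into a posterior belief about an object's quality. The network structure, node names and all probability values are illustrative assumptions, not figures from this work.

```python
# Minimal sketch: a latent Quality node with two conditionally independent
# observed children (one explicit rating, one implicit usage signal).
# Inference is exact enumeration over the two Quality states.

def posterior_quality(prior, likelihoods, evidence):
    """Compute P(Quality | evidence) for a naive network in which each
    observed measure depends only on the latent Quality node."""
    joint = {}
    for q, p in prior.items():
        for measure, observed in evidence.items():
            p *= likelihoods[measure][q][observed]
        joint[q] = p
    total = sum(joint.values())
    return {q: p / total for q, p in joint.items()}

# Hypothetical prior and conditional probability tables.
prior = {"low": 0.5, "high": 0.5}
likelihoods = {
    # P(explicit reviewer rating | Quality)
    "explicit_rating": {"low": {"good": 0.2, "poor": 0.8},
                        "high": {"good": 0.8, "poor": 0.2}},
    # P(implicit usage signal, e.g. repeat visits | Quality)
    "implicit_usage": {"low": {"heavy": 0.3, "light": 0.7},
                       "high": {"heavy": 0.7, "light": 0.3}},
}

# An object with a good explicit rating and heavy implicit usage.
print(posterior_quality(prior, likelihoods,
                        {"explicit_rating": "good", "implicit_usage": "heavy"}))
# -> roughly {'low': 0.10, 'high': 0.90}: both signals shift belief to 'high'
```

Adding further evidence nodes (reviewer instruments, repository-specific scales) only multiplies in more likelihood terms, which is what makes this representation a plausible vehicle for combining heterogeneous measures across repositories.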

[1] John Riedl et al. GroupLens: An open architecture for collaborative filtering of netnews, 1994, CSCW '94.

[2] John E. Hunter et al. Methods of Meta-Analysis: Correcting Error and Bias in Research Findings, 1991.

[3] L. Rabiner et al. An introduction to hidden Markov models, 1986, IEEE ASSP Magazine.

[4] Marc J. Rosenberg et al. E-Learning: Strategies for Delivering Knowledge in the Digital Age, 2000.

[5] Jan F. Kreider et al. Unified prediction and diagnosis in engineering systems by means of distributed belief networks, 1999.

[6] John C. Nesbit et al. A Convergent Participation Model for Evaluation of Learning Objects, 2002.

[7] Robert A. Reiser et al. Evaluating instructional software: A review and critique of current methods, 1994.

[8] Prem Melville, Raymond J. Mooney and Ramadass Nagarajan. Content-Boosted Collaborative Filtering, 2001.

[9] Laurence F. Johnson et al. Elusive Vision: Challenges Impeding the Learning Object Economy, 2003.

[10] Michael J. Hannafin et al. Teaching and learning in digital environments: The resurgence of resource-based learning, 2001.

[11] Yoav Shoham et al. Fab: Content-based, collaborative recommendation, 1997, CACM.

[12] Wray L. Buntine. Operations for Learning with Graphical Models, 1994, J. Artif. Intell. Res.

[13] Mimi Recker et al. What do you recommend? Implementation and analyses of collaborative information filtering of web resources for education, 2003.

[14] Nancy E. Betz et al. Tests and assessment, 1985.

[15] John C. Nesbit et al. Learning Object Evaluation: Computer-Mediated Collaboration and Inter-Rater Reliability, 2003.

[16] D. Rubin et al. Maximum likelihood from incomplete data via the EM algorithm (with discussions on the paper), 1977.

[17] Donald A. Berry et al. Statistics: A Bayesian Perspective, 1995.

[18] L. Crocker et al. Introduction to Classical and Modern Test Theory, 1986.

[19] Douglas B. Terry et al. Using collaborative filtering to weave an information tapestry, 1992, CACM.