A genetic fuzzy expert system for automatic question classification in a competitive learning environment

Intelligent tutoring systems are efficient tools to automatically adapt the learning process to the student's progress and needs. One possible adaptation is an adaptive question sequencing system, which matches the difficulty of the questions to the student's knowledge level. In this context, it is important to correctly classify the questions to be presented to students according to their difficulty level. Many systems have been developed for estimating the difficulty of questions. However, the variety of application environments makes it difficult to apply existing solutions directly to other applications. Therefore, a specific solution has been designed to determine the difficulty level of open questions in an automatic and objective way. This solution can be applied to activities with special temporal and running features, such as the contests developed through QUESTOURnament, a tool integrated into the e-learning platform Moodle. The proposed solution is a fuzzy expert system that uses a genetic algorithm to characterize each difficulty level. From the output of the algorithm, the system defines the fuzzy rules used to classify the questions. Data registered from a competitive activity in a Telecommunications Engineering course have been used to validate the system against a group of experts. Results show that the system performs successfully. Therefore, it can be concluded that the system is able to perform the question classification task in a competitive learning environment.
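To make the general approach concrete, the following is a minimal Python sketch of the fuzzy classification step only. The features (mean response time, mean score ratio), the triangular membership functions, and every numeric parameter are illustrative assumptions, not the paper's actual design; in the proposed system a genetic algorithm would search for the parameters that characterize each difficulty level, whereas here they are fixed by hand.

```python
# Hypothetical sketch of fuzzy difficulty classification.
# Feature names and all membership-function parameters are assumptions;
# in the described system a genetic algorithm would tune the per-level
# characterizations instead of using the hand-picked values below.

from dataclasses import dataclass


def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)


@dataclass
class DifficultyLevel:
    name: str
    time_mf: tuple   # (a, b, c) for mean response time in minutes (assumed)
    score_mf: tuple  # (a, b, c) for mean score ratio in [0, 1] (assumed)


# Stand-ins for the level characterizations a GA would produce.
LEVELS = [
    DifficultyLevel("easy",   time_mf=(0, 5, 15),    score_mf=(0.6, 0.9, 1.0)),
    DifficultyLevel("medium", time_mf=(10, 25, 45),  score_mf=(0.3, 0.6, 0.85)),
    DifficultyLevel("hard",   time_mf=(35, 60, 120), score_mf=(0.0, 0.25, 0.55)),
]


def classify(mean_response_time_min, mean_score_ratio):
    """One rule per difficulty level: AND the feature memberships with a
    min t-norm and return the level whose rule fires most strongly."""
    firing = {}
    for level in LEVELS:
        mu_time = triangular(mean_response_time_min, *level.time_mf)
        mu_score = triangular(mean_score_ratio, *level.score_mf)
        firing[level.name] = min(mu_time, mu_score)
    return max(firing, key=firing.get), firing


if __name__ == "__main__":
    label, degrees = classify(mean_response_time_min=30, mean_score_ratio=0.5)
    print(label, degrees)  # -> "medium" with its firing degrees
```

The min t-norm makes each rule require both pieces of evidence (slow responses and low scores, say) before a level fires strongly, which mirrors the conjunctive fuzzy rules described in the abstract; any other aggregation or feature set could be substituted without changing the overall scheme.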
