Evaluation, Revision, and Learning

The first part of this chapter is basic and of interest to readers who want to build or maintain a CBR system; the second part provides background knowledge. The chapter deals with revising the methods if something is definitely wrong and with improving CBR systems by machine learning if the results are weak; methods for evaluating, revising, and improving CBR are discussed in that order. Evaluation detects the weaknesses of a system. Revision is an activity from outside that removes individual faults and related problems. Learning considers improvement of the whole system; it deals with all knowledge containers of a CBR system as well as with the system as a whole. For improving the case base, the three classic algorithms IB1, IB2, and IB3 are presented. The learning of similarities is concerned with similarity relations, weights, and local similarities; as a crucial step in similarity learning, the learning of weights is discussed. The chapter also presents machine learning methods that are used within and in integration with CBR, including regression learning, artificial neural networks, genetic algorithms, clustering algorithms, and Bayesian learning.
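To make the contrast between the case-base learning algorithms concrete, the following is a minimal sketch of IB1 and IB2 (Aha et al.), assuming numeric feature vectors, Euclidean distance, and 1-nearest-neighbor classification; the function and variable names are illustrative, not taken from the chapter. IB1 retains every training instance, while IB2 retains an instance only when the current case base misclassifies it, which typically yields a much smaller case base on redundant data. (IB3, which additionally tracks classification records to filter out noisy instances, is omitted here for brevity.)

```python
import math

def dist(a, b):
    # Euclidean distance between two numeric feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest(case_base, x):
    # Return the stored (features, label) case closest to x
    return min(case_base, key=lambda c: dist(c[0], x))

def ib1(stream):
    # IB1: retain every training instance
    case_base = []
    for x, label in stream:
        case_base.append((x, label))
    return case_base

def ib2(stream):
    # IB2: retain an instance only if the current case base misclassifies it
    case_base = []
    for x, label in stream:
        if not case_base or nearest(case_base, x)[1] != label:
            case_base.append((x, label))
    return case_base

# Illustrative stream with redundant instances of two classes
stream = [((0.0,), "a"), ((0.1,), "a"), ((1.0,), "b"),
          ((0.2,), "a"), ((0.9,), "b")]
# IB1 stores all 5 cases; IB2 stores only 2 (one misclassified per class)
```

Note that IB2's result depends on the presentation order of the stream, which is exactly the incremental, instance-by-instance setting the chapter's case-base improvement discussion assumes.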
