Given a set of training examples in the form of (input, output) pairs, induction generates a set of rules that, when applied to an input example, produce a target output or class for that example. At deduction time, these rules can be applied to a pre-classified test set to evaluate their accuracy. With existing rule induction systems, the rules are "frozen" on the training set and cannot adapt to a changing distribution of examples. In this paper we propose two approaches to dynamically refine the rules at deduction time to overcome this limitation. For each test example, we perform a classification using the existing rules; depending on whether that classification is correct, the rule responsible for it is refined. When the correct classification is found, we refine the associated rule in one of two ways: by increasing the coverage of all conjunctions associated with the rule, or by increasing the coverage of the rule's most important conjunction only for the test example in question. The refined rules are then used to deduce the classifications of the remaining examples. Of the two deduction methods, the second has been shown to significantly improve the accuracy of the rules compared with the regular, non-dynamic deduction process.
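The sketch below illustrates the general idea of refining rules during deduction rather than freezing them after training. It is only a minimal illustration under assumed representations: rules are taken to be conjunctions of per-attribute value sets, and the `Rule`, `classify`, and `dynamic_deduce` names, the `importance` scores, and the refinement steps are hypothetical simplifications, not the paper's actual data structures or algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    label: str
    # One conjunction per attribute: the set of values that attribute may take.
    conjunctions: dict = field(default_factory=dict)
    # Assumed per-conjunction importance scores (e.g. derived from training coverage).
    importance: dict = field(default_factory=dict)

    def covers(self, example):
        return all(example[a] in vals for a, vals in self.conjunctions.items())

def classify(rules, example):
    """Return the first rule that covers the example, or None."""
    for rule in rules:
        if rule.covers(example):
            return rule
    return None

def dynamic_deduce(rules, test_set, refine_all=False):
    """Classify each test example, then refine the rule for its true class.

    refine_all=True  -> widen every conjunction of that rule (method 1)
    refine_all=False -> widen only its most important conjunction (method 2)
    """
    correct = 0
    for example, true_label in test_set:
        rule = classify(rules, example)
        if rule is not None and rule.label == true_label:
            correct += 1
        # Refine the rule associated with the correct class so it covers
        # this example (a simplified reading of the refinement scheme).
        target = next((r for r in rules if r.label == true_label), None)
        if target is None:
            continue
        if refine_all:
            for attr in target.conjunctions:
                target.conjunctions[attr].add(example[attr])
        else:
            attr = (max(target.importance, key=target.importance.get)
                    if target.importance else next(iter(target.conjunctions)))
            target.conjunctions[attr].add(example[attr])
    return correct / len(test_set)
```

In this reading, the second method changes the rule as little as possible per example (one conjunction widened, only enough to cover the current example), which is consistent with the abstract's observation that it is the more accurate of the two.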