Recent work by Mingers and by Buntine and Niblett on the performance of various attribute selection measures has addressed the topic of random selection of attributes in the construction of decision trees. This article is concerned with the mechanisms underlying the relative performance of conventional and random attribute selection measures. The three experiments reported here employed synthetic data sets, constructed to have the precise properties required to test specific hypotheses. The principal underlying idea was that the performance decrement typical of random attribute selection is due to two factors. First, there is a greater chance that informative attributes will be omitted from the subset selected for the final tree. Second, there is a greater risk of overfitting, caused by attributes of little or no value in discriminating between classes being “locked in” to the tree structure near the root. The first experiment showed that the performance decrement increased with the number of available pure-noise attributes. The second experiment indicated that there was little decrement when all the attributes were equally important in discriminating between classes. The third experiment showed that a rather greater performance decrement (than in the second experiment) could be expected if the attributes were all informative, but to differing degrees.
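The experiments themselves used purpose-built synthetic data sets whose exact construction is not reproduced here. As a rough illustration of the contrast being tested, the sketch below (plain Python; the 90%/70% class-agreement rates, the count of eight pure-noise attributes, the 500 cases, and all function names are illustrative assumptions, not taken from the paper) generates a data set with two informative attributes of unequal strength plus several pure-noise attributes, then contrasts the attribute that an information-gain criterion would place at the root with one picked at random.

import random
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy reduction from splitting on a binary attribute."""
    gain = entropy(labels)
    for value in (0, 1):
        subset = [lab for row, lab in zip(rows, labels) if row[attr] == value]
        if subset:
            gain -= len(subset) / len(labels) * entropy(subset)
    return gain

def make_dataset(n, n_noise):
    """Two informative attributes of unequal strength plus pure-noise attributes."""
    rows, labels = [], []
    for _ in range(n):
        label = random.randint(0, 1)
        strong = label if random.random() < 0.9 else 1 - label  # agrees with the class 90% of the time
        weak = label if random.random() < 0.7 else 1 - label    # agrees with the class 70% of the time
        noise = [random.randint(0, 1) for _ in range(n_noise)]  # carries no class information
        rows.append([strong, weak] + noise)
        labels.append(label)
    return rows, labels

random.seed(0)
N_NOISE = 8
rows, labels = make_dataset(500, N_NOISE)
n_attrs = len(rows[0])

gains = [information_gain(rows, labels, a) for a in range(n_attrs)]
chosen_by_gain = max(range(n_attrs), key=lambda a: gains[a])
chosen_at_random = random.randrange(n_attrs)

print("attribute chosen by information gain:", chosen_by_gain)  # almost always the strong attribute (index 0)
print("attribute chosen at random:", chosen_at_random)
print(f"chance that a random pick is pure noise: {N_NOISE / n_attrs:.2f}")

Under these assumptions the chance that a random pick lands on a noise attribute grows with the number of noise attributes available, which is the mechanism the first experiment probes, while the unequal 90%/70% agreement rates mirror the situation examined in the third experiment.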
[1] S. S. Stevens, et al. On the Theory of Scales of Measurement. Science, 1946.
[2] G. Keppel, et al. Design and Analysis: A Researcher's Handbook, 1976.
[3] Leo Breiman, et al. Classification and Regression Trees, 1984.
[4] Ivan Bratko, et al. Experiments in automatic learning of medical diagnostic rules, 1984.
[5] Allan P. White, et al. Predictor: An Alternative Approach to Uncertain Inference in Expert Systems. IJCAI, 1985.
[6] I. Bratko, et al. Learning decision rules in noisy domains, 1987.
[7] A. P. White, et al. Probabilistic induction by dynamic part generation in virtual trees, 1987.
[8] J. R. Quinlan. Decision trees and multi-valued attributes, 1988.
[9] Wray L. Buntine, et al. A Further Comparison of Splitting Rules for Decision-Tree Induction. Machine Learning, 1992.
[10] John Mingers, et al. An empirical comparison of selection measures for decision-tree induction. Machine Learning, 1989.
[11] J. Ross Quinlan, et al. Induction of Decision Trees. Machine Learning, 1986.
[12] John Mingers, et al. An Empirical Comparison of Pruning Methods for Decision Tree Induction. Machine Learning, 1989.