Overfitting avoidance in induction has often been treated as if it statistically increases expected predictive accuracy. In fact, there is no statistical basis for believing it will have this effect. Overfitting avoidance is simply a form of bias and, as such, its effect on expected accuracy depends not on statistics but on the degree to which this bias is appropriate to a problem-generating domain. This paper identifies one important factor affecting how appropriate the bias of overfitting avoidance is--the abundance of training data relative to the complexity of the relationship to be induced--and shows empirically how this factor determines whether methods such as pessimistic and cross-validated cost-complexity pruning increase or decrease predictive accuracy in decision tree induction. The effect of sparse data is illustrated first in an artificial domain and then in more realistic examples drawn from the UCI repository of machine learning databases.
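To make the claim concrete, below is a minimal sketch of the kind of experiment the abstract describes, not the paper's own study. It assumes Python with scikit-learn (tools that postdate the paper), uses an illustrative synthetic dataset with label noise in place of the paper's artificial domain, and compares an unpruned tree against one pruned by cross-validated cost-complexity pruning as the training set grows from sparse to abundant. The dataset parameters and sample sizes are assumptions chosen for illustration; pessimistic pruning is not shown, and scikit-learn's ccp_alpha pruning stands in for CART-style cost-complexity pruning.

```python
# A minimal sketch, assuming scikit-learn; the dataset, its parameters, and
# the sample sizes below are illustrative, not taken from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier

# A moderately complex concept with 10% label noise, so full trees can overfit.
X, y = make_classification(n_samples=4000, n_features=20, n_informative=10,
                           flip_y=0.10, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=2000,
                                                  random_state=0)

for n_train in (50, 200, 1000, 2000):  # sparse -> abundant training data
    X_tr, y_tr = X_pool[:n_train], y_pool[:n_train]

    # No overfitting avoidance: grow the tree to purity.
    full = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

    # Cross-validated cost-complexity pruning: choose ccp_alpha by 5-fold CV
    # over the alphas on the tree's own cost-complexity pruning path.
    path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(
        X_tr, y_tr)
    alphas = np.unique(np.clip(path.ccp_alphas, 0.0, None))
    search = GridSearchCV(DecisionTreeClassifier(random_state=0),
                          {"ccp_alpha": alphas}, cv=5).fit(X_tr, y_tr)

    print(f"n={n_train:4d}  unpruned={full.score(X_test, y_test):.3f}  "
          f"pruned={search.best_estimator_.score(X_test, y_test):.3f}")
```

Under the paper's argument, the expected pattern on a run like this is that pruning helps most when training data is sparse relative to the complexity of the target relationship, and that its advantage shrinks, and can reverse, as data becomes abundant.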