We derive an upper bound on the generalization error of classifiers from a certain class of threshold networks. The bound depends on the margin of the classifier and the average complexity of the hidden units (where the average is over the weights assigned to each hidden unit). By representing convex combinations of decision trees or mask perceptrons as such threshold networks, we obtain similar bounds on the generalization error of these classifiers. These bounds have immediate application to combinations of decision trees or mask perceptrons by majority vote, which appear in techniques such as boosting, bagging and arcing. For combined decision trees, previous bounds depend on either the complexity of the most complex decision tree in the combination or the average complexity of the individual decision trees, where the complexity of each decision tree depends on the total number of leaves in the tree. The bound in this paper depends on the average complexity of the individual decision trees, where the complexity of each decision tree depends on the effective number of leaves, a quantity which can be significantly smaller than the total number of leaves.
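The notions above can be made concrete with a small sketch. The code below is illustrative only and does not use the paper's exact definitions: it represents a majority vote over decision stumps (depth-1 trees) as a weighted threshold combination, and computes the normalized margin y·f(x), where f(x) is the weighted vote divided by the total weight. All names (`stump`, `margins`) and the toy data are assumptions of this sketch, not from the paper.

```python
# Illustrative sketch: normalized margins of a weighted majority vote
# of decision stumps. A positive margin means the example is classified
# correctly; margin-based bounds of the kind described above degrade as
# margins shrink toward zero.

def stump(threshold, feature=0):
    """A depth-1 decision tree: votes +1 if x[feature] > threshold, else -1."""
    return lambda x: 1 if x[feature] > threshold else -1

def margins(X, y, hypotheses, weights):
    """Return y * f(x) for each example, with f the weight-normalized vote."""
    total = sum(abs(w) for w in weights)
    out = []
    for x, label in zip(X, y):
        vote = sum(w * h(x) for h, w in zip(hypotheses, weights)) / total
        out.append(label * vote)
    return out

# Toy 1-D data: negatives below 0.5, positives above.
X = [(0.2,), (0.4,), (0.6,), (0.9,)]
y = [-1, -1, 1, 1]
hs = [stump(0.5), stump(0.3), stump(0.7)]
ws = [0.5, 0.25, 0.25]

print(margins(X, y, hs, ws))  # → [1.0, 0.5, 0.5, 1.0]
```

Here every training example has margin at least 0.5, the situation in which margin-based bounds for voted combinations are most favorable.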