The concept of measure functions for generalization performance is proposed. This concept provides an alternative way of designing and selecting generalization algorithms, and it makes a clear distinction between modeling a generalization problem and solving the resulting computational problem. The modeling is captured in a measure function that assigns to each possible combination of a training set and a generalization a value describing how good the generalization is. The computational problem is then to find a generalization that maximizes the measure function. With this concept in place, some recently debated issues concerning the quality of generalization are clarified. We argue that, in addition to their theoretical relevance, measure functions are of great practical value: (i) they force us to make explicit the relevant features of the generalization problem at hand, (ii) they provide a deeper understanding of existing generalization algorithms, and (iii) they support the construction of problem-specific algorithms. We illustrate the second point with an experiment indicating that the difference between generalizations computed by different algorithms is often smaller than the difference between generalizations computed by different versions of the same algorithm. The third point is supported by a novel algorithm that incrementally searches for a generalization optimizing a given measure function.
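To make the separation between modeling (the measure function) and computation (the search) concrete, the following Python sketch is a minimal illustration rather than the paper's method: all names (Hypothesis, training_accuracy, search_best, threshold) are hypothetical, training accuracy stands in for a realistic measure function, and exhaustive enumeration over a small candidate set stands in for the paper's incremental search.

```python
# Minimal, hypothetical sketch of the measure-function view of generalization.
# All names here are illustrative; they are not taken from the paper.
from typing import Callable, Iterable, List, Tuple

# A training example: an input vector and a class label.
Example = Tuple[Tuple[float, ...], int]
# A generalization (hypothesis) maps an input vector to a predicted label.
Hypothesis = Callable[[Tuple[float, ...]], int]
# A measure function scores how good a hypothesis is for a given training set.
Measure = Callable[[List[Example], Hypothesis], float]


def training_accuracy(train: List[Example], h: Hypothesis) -> float:
    """A deliberately simple measure: the fraction of training examples
    classified correctly. Realistic measure functions would also encode,
    e.g., complexity penalties or prior knowledge about the problem."""
    return sum(1 for x, y in train if h(x) == y) / len(train)


def search_best(train: List[Example],
                candidates: Iterable[Hypothesis],
                measure: Measure) -> Hypothesis:
    """The computational problem: find a hypothesis maximizing the measure.
    Exhaustive enumeration over a finite candidate set stands in for the
    incremental search described in the paper."""
    return max(candidates, key=lambda h: measure(train, h))


if __name__ == "__main__":
    # Toy 1-D data: the label is 1 when the single feature exceeds 0.5.
    train: List[Example] = [((0.1,), 0), ((0.4,), 0), ((0.6,), 1), ((0.9,), 1)]

    def threshold(t: float) -> Hypothesis:
        return lambda x: int(x[0] > t)

    candidates = [threshold(t / 10) for t in range(10)]
    best = search_best(train, candidates, training_accuracy)
    print([best(x) for x, _ in train])  # -> [0, 0, 1, 1]
```

Because the measure function is an explicit, first-class argument, swapping in a different measure (say, an MDL-style or cross-validation-based score) changes the modeling without touching the search procedure, which is the separation the abstract argues for.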