Overfitting and undercomputing in machine learning
A central problem in machine learning is supervised learning, that is, learning from labeled training data. For example, a learning system for medical diagnosis might be trained with examples of patients whose case records (medical tests, clinical observations) and diagnoses were known. The task of the learning system is to infer a function that predicts the diagnosis of a patient from his or her case records. The function to be learned might be represented as a set of rules, a decision tree, a Bayes network, or a neural network. Learning algorithms essentially operate by searching some space of functions (usually called the hypothesis class) for a function that fits the given data. Because the hypothesis class usually contains exponentially many functions, this search cannot examine individual hypothesis functions one by one but must instead use some more direct method of constructing hypothesis functions from the data. The search can usually be formalized by defining an objective function (e.g., the number of data points predicted incorrectly) and applying some algorithm to find a function that minimizes it. Unfortunately, this optimization problem is typically intractable: fitting the weights of a neural network and finding the smallest decision tree are both NP-complete problems [Blum and Rivest 1989; Quinlan and Rivest 1989]. Hence, heuristic algorithms such as gradient descent (for neural networks) and greedy search (for decision trees) have been applied with great success.

Of course, the suboptimality of such heuristic algorithms immediately suggests a reasonable line of research: find algorithms that can search the hypothesis class better. Hence, there has been extensive research in applying second-order methods to fit neural networks and in conducting much more thorough searches when learning decision trees and rule sets. Ironically, when these algorithms were tested on real datasets, their performance was often worse than that of simple gradient descent or greedy search [Quinlan and Cameron-Jones 1995; Weigend 1994]. In short: it appears to be better not to optimize!

One of the other important trends in machine-learning research has been the establishment and nurturing of connections between previously disparate fields, including computational learning theory, connectionist learning, symbolic learning, and statistics. The connection to statistics was crucial in resolving this paradox. The key problem arises from the structure of the machine-learning task: a learning algorithm is trained on a set of training data, but it is then applied to make predictions on new data points. The goal is to maximize its predictive accuracy on the new data points, not necessarily its accuracy on the training data. Indeed, if we work too hard to find the very best fit to the training data, there is a risk that we will fit the noise in the data by memorizing various peculiarities of the training data rather than finding a truly general predictive rule.
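As a small, self-contained illustration of this trade-off (a sketch, not code from the article; the data-generating function, noise level, and polynomial degrees are assumptions chosen only for the demonstration), the following Python snippet fits polynomials of increasing degree to noisy samples of a smooth function. The richer hypothesis classes drive the training error steadily down, while the error on fresh test points eventually grows, which is the overfitting phenomenon described above.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

# Noisy observations of a simple underlying function: y = sin(x) + noise.
x_train = np.sort(rng.uniform(0.0, 3.0, size=20))
y_train = np.sin(x_train) + rng.normal(scale=0.3, size=x_train.size)
x_test = np.sort(rng.uniform(0.0, 3.0, size=200))
y_test = np.sin(x_test) + rng.normal(scale=0.3, size=x_test.size)

for degree in (1, 4, 15):
    # A larger degree means a richer hypothesis class, so the least-squares
    # fit to the 20 training points keeps improving ...
    model = Polynomial.fit(x_train, y_train, deg=degree)
    train_mse = np.mean((model(x_train) - y_train) ** 2)
    # ... but past some point the extra flexibility fits the noise, and the
    # error on fresh test points typically gets worse, not better.
    test_mse = np.mean((model(x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE = {train_mse:.3f}, test MSE = {test_mse:.3f}")
```

Here, "working harder at fitting the training data" corresponds to allowing a higher polynomial degree; the same pattern is what the studies cited above observed when second-order training or more exhaustive search drove training error lower than a simple heuristic would have.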
[1] J. Ross Quinlan and Ronald L. Rivest. Inferring Decision Trees Using the Minimum Description Length Principle. Information and Computation, 1989.
[2] Andreas S. Weigend. On Overfitting and the Effective Number of Hidden Units. 1993.
[3] J. Ross Quinlan and R. Mike Cameron-Jones. Oversearching and Layered Search in Empirical Learning. IJCAI, 1995.