Learning Curves, Model Selection and Complexity of Neural Networks

Learning curves show how the performance of a neural network improves as the number of training examples increases, and how this improvement depends on the complexity of the network. The present paper clarifies the asymptotic properties of two learning curves and the relation between them: one describes the predictive (generalization) loss and the other the training loss. The result leads to a natural definition of the complexity of a neural network and, moreover, provides a new criterion for model selection.
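To make the model-selection idea concrete, the following is a minimal sketch (not the paper's exact criterion) of selecting a model by penalizing the training loss with an effective-complexity term divided by the sample size. As a simplifying assumption, polynomial regression stands in for networks of increasing complexity, and the effective complexity is approximated by the parameter count, which corresponds to the well-specified, Gaussian-noise special case; the data-generating function and all constants are hypothetical.

```python
# Sketch: choose model complexity by minimizing training loss plus a
# complexity penalty that shrinks like 1/N (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: cubic target plus Gaussian noise.
N = 200
x = rng.uniform(-1.0, 1.0, size=N)
y = 1.0 - 2.0 * x + 0.5 * x**3 + 0.1 * rng.normal(size=N)

def fit_poly(degree):
    """Least-squares polynomial fit; returns training mean squared error."""
    X = np.vander(x, degree + 1)                 # columns x^degree, ..., x, 1
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.mean((X @ coef - y) ** 2)

results = {}
for degree in range(1, 9):
    train_loss = fit_poly(degree)
    k = degree + 1                               # parameter count as a stand-in
                                                 # for effective complexity
    # Penalized criterion: training loss inflated by a 2k/N correction,
    # the Gaussian special case of an information-criterion penalty.
    criterion = train_loss * (1.0 + 2.0 * k / N)
    results[degree] = (train_loss, criterion)

best = min(results, key=lambda d: results[d][1])
for d, (tl, crit) in sorted(results.items()):
    print(f"degree {d}: train MSE {tl:.4f}, penalized criterion {crit:.4f}")
print("selected degree:", best)
```

The sketch shows the typical pattern: the training loss alone decreases monotonically with complexity, so it cannot be used for selection by itself, whereas the penalized criterion trades the decrease in training loss against the complexity term and attains a minimum at an intermediate model.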