Statistical asymptotic theory is often used in theoretical results in computational and statistical learning theory. It describes the limiting distribution of the maximum likelihood estimator (MLE) as a normal distribution. In layered models such as neural networks, however, the regularity conditions of the asymptotic theory are not necessarily satisfied: the true parameter is not identifiable if the target function can be realized by a network smaller than the model. Little is known about the behavior of the MLE in such cases. In this paper, we analyze the expectation of the generalization error of three-layer linear neural networks and elucidate its strange behavior in unidentifiable cases. We show that in the unidentifiable cases the expected generalization error is larger than the value given by the usual asymptotic theory and depends on the rank of the target function.
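As a point of reference, the regular-case baseline invoked above is the classical expansion of the expected Kullback-Leibler generalization error of the MLE; the sketch below is a standard-theory illustration, not a statement of this paper's unidentifiable-case result, and the symbols $L$, $H$, $M$ are introduced here only for illustration:

\[
  \mathbb{E}\!\left[ D\!\left(p_{\theta_0} \,\middle\|\, p_{\hat\theta_{\mathrm{MLE}}}\right) \right]
  = \frac{d}{2n} + o\!\left(\tfrac{1}{n}\right),
\]

where $n$ is the sample size and $d$ is the number of free parameters. For a three-layer linear network with $L$ inputs, $H$ hidden units, and $M$ outputs, realizing an $M \times L$ matrix of rank at most $H$, a natural choice of $d$ is the dimension $H(L+M-H)$ of the manifold of rank-$H$ matrices; the paper's point is that this baseline is exceeded when the target function has rank smaller than $H$.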