The number of trainable constants in a neural network, such as connection weights and thresholds, may directly determine the complexity of its learning space and, consequently, affect the learning process. The locations of these constants are also likely related to this complexity. Moreover, a constant trained at the first step of backpropagation (BP) learning may add less to the complexity of the learning space than constants trained at later steps. Reflecting this perspective, this paper proposes a one-hidden-layer neural network whose learning space is less complex than that of an ordinary one-hidden-layer network. Specifically, we construct a one-hidden-layer network with fewer constants to be trained, most of which are trained at the first step of BP training. The network has more hidden-layer units than the minimum required for approximation, but the number of constants to be trained is smaller. The goal of the network is to overcome the difficulties of statistical learning with dichotomous random teacher signals. As an example, we apply it to the approximation of a Bayesian discriminant function.
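To make the idea concrete, the following is a minimal NumPy sketch, not the paper's exact construction: it assumes a simplified variant in which the hidden-layer weights and thresholds of a one-hidden-layer network are set once at initialization (a stand-in for constants handled at the first step of BP) and only the output weights are trained, using dichotomous 0/1 teacher signals drawn from two Gaussian classes. All variable names, the over-complete hidden-layer size, and the data settings are illustrative assumptions; under squared-error or cross-entropy training the network output approximates the posterior probability, i.e., a Bayesian discriminant function.

# Minimal sketch (assumption, not the paper's construction): fixed hidden layer,
# trained output weights, dichotomous teacher signals.
import numpy as np

rng = np.random.default_rng(0)

# Two-class Gaussian data with dichotomous 0/1 teacher signals.
n = 2000
x0 = rng.normal(loc=-1.0, scale=1.0, size=(n // 2, 1))   # class 0
x1 = rng.normal(loc=+1.0, scale=1.0, size=(n // 2, 1))   # class 1
X = np.vstack([x0, x1])
t = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])   # teacher signals

# Hidden layer: more units than the minimum, fixed after initialization.
H = 30                                       # deliberately over-complete hidden layer
W_hid = rng.normal(scale=2.0, size=(1, H))   # fixed input-to-hidden weights
b_hid = rng.normal(scale=2.0, size=H)        # fixed hidden thresholds
Phi = np.tanh(X @ W_hid + b_hid)             # hidden activations, shape (n, H)
Phi = np.hstack([Phi, np.ones((n, 1))])      # append a bias unit for the output layer

# Only the output weights are trained (the "fewer constants").
w_out = np.zeros(H + 1)
lr = 0.05
for epoch in range(500):
    y = 1.0 / (1.0 + np.exp(-(Phi @ w_out)))   # sigmoid output of the network
    grad = Phi.T @ (y - t) / n                 # gradient of the cross-entropy loss
    w_out -= lr * grad

# Compare the network output with the true posterior P(class 1 | x),
# which for these two Gaussians equals sigmoid(2x).
xs = np.linspace(-4, 4, 9).reshape(-1, 1)
Phi_s = np.hstack([np.tanh(xs @ W_hid + b_hid), np.ones((len(xs), 1))])
net_post = 1.0 / (1.0 + np.exp(-(Phi_s @ w_out)))
true_post = 1.0 / (1.0 + np.exp(-2.0 * xs.ravel()))
for x, p_net, p_true in zip(xs.ravel(), net_post, true_post):
    print(f"x={x:+.1f}  network={p_net:.3f}  true posterior={p_true:.3f}")

In this toy setting the network output should track the true posterior closely; the point of the sketch is only that restricting BP to the output weights leaves a much smaller set of constants to be trained while the over-complete fixed hidden layer still provides enough representational capacity.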