A Neural Network Having Fewer Inner Constants to Be Trained and Bayesian Decision

The number of constants to be trained in a neural network, such as connection weights and thresholds, may directly determine the complexity of its learning space and, consequently, affect the learning process. It is also likely that the locations of these constants within the network are related to this complexity. In addition, a constant trained at the first step of backpropagation (BP) learning may not add to the complexity of the learning space, in contrast to constants trained at later steps. Reflecting this perspective, this paper proposes a one-hidden-layer neural network whose learning space is less complex than that of an ordinary one-hidden-layer neural network. Specifically, we construct a one-hidden-layer neural network having fewer constants to be trained, most of which are trained at the first step of BP training. The network has more hidden-layer units than the minimum required for approximation, but the number of constants to be trained is smaller. The goal of the network is to overcome the difficulties that arise in statistical learning with dichotomous random teacher signals. As an example, we apply it to the approximation of a Bayesian discriminant function.
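To make the general idea concrete, the following sketch illustrates one way such a setup can look; it is an assumption for illustration, not the paper's exact construction. A one-hidden-layer network is given more hidden units than strictly necessary, the hidden-layer weights and thresholds are fixed at the outset (so they are not among the constants to be trained later), and only the output-layer weights are trained by gradient descent on dichotomous 0/1 teacher signals. With squared error and 0/1 targets, the trained output approximates the posterior probability, i.e. a Bayesian discriminant function. All names, sizes, and learning-rate values below are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the paper's exact construction):
# a one-hidden-layer network whose hidden-layer weights and thresholds are fixed,
# so that only the output-layer weights remain to be trained, using dichotomous
# 0/1 teacher signals drawn from two overlapping classes.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Two overlapping 1-D Gaussian classes; the teacher signal t is 0 or 1.
n = 2000
t = rng.integers(0, 2, size=n)                      # dichotomous random teacher signals
x = np.where(t == 1, rng.normal(1.0, 1.0, n), rng.normal(-1.0, 1.0, n)).reshape(-1, 1)

# Hidden layer: more units than the minimum needed, but their weights and
# thresholds are fixed here rather than trained (assumption for illustration).
H = 30
W_hid = rng.normal(0.0, 2.0, size=(1, H))           # fixed input-to-hidden weights
b_hid = rng.normal(0.0, 2.0, size=H)                # fixed hidden thresholds
Phi = sigmoid(x @ W_hid + b_hid)                    # hidden activations, shape (n, H)

# Only the output-layer constants are trained, by plain gradient descent on
# squared error; with 0/1 targets the output estimates P(class = 1 | x).
w_out = np.zeros(H)
b_out = 0.0
lr = 0.1
for _ in range(2000):
    y = sigmoid(Phi @ w_out + b_out)
    delta = (y - t) * y * (1 - y)                   # gradient of 0.5*(y - t)^2 w.r.t. the net input
    w_out -= lr * (Phi.T @ delta) / n
    b_out -= lr * np.mean(delta)

# For these two Gaussians with equal priors, the true posterior is sigmoid(2x),
# so the learned output can be compared against it on a few test points.
x_test = np.linspace(-4, 4, 9).reshape(-1, 1)
y_test = sigmoid(sigmoid(x_test @ W_hid + b_hid) @ w_out + b_out)
print(np.round(y_test, 2))
print(np.round(sigmoid(2 * x_test).ravel(), 2))
```

In this sketch the only trained constants are the H output weights and one output threshold, while the hidden-layer constants are set once and left untouched, which is one simple way to realize "fewer constants to be trained, most of which are trained at the first step."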