In this paper, we present a new learning method that uses prior information for three-layer neural networks. Usually, when neural networks are used for system identification, all of their weights are trained independently, without considering the interrelations among the weight values, and the training results are therefore often poor, because each parameter influences the others during learning. To overcome this problem, we first derive an exact mathematical equation that describes the relation between the weight values given a set of data conveying prior information. We then present a new learning method that trains part of the weights and calculates the others from this exact mathematical equation. This method keeps the given a priori mathematical structure exactly the same during learning; in other words, training is done so that the network follows a predetermined trajectory. Computer simulation results are provided to support this approach.
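To illustrate the idea of training only part of the weights while computing the rest from an exact constraint on prior data, the following is a minimal sketch, not the authors' implementation, for a three-layer network with one hidden layer: the hidden-layer weights are trained by gradient descent, and after every update the output-layer weights are recomputed by solving a small linear system so that the network reproduces a handful of prior data points exactly. The toy target function and all names (prior_x, prior_y, solve_output_weights, etc.) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system to identify: y = sin(2*pi*x), with a few "prior" points known exactly.
train_x = rng.uniform(0.0, 1.0, size=(200, 1))
train_y = np.sin(2 * np.pi * train_x)
prior_x = np.array([[0.0], [0.25], [0.5], [0.75], [1.0]])
prior_y = np.sin(2 * np.pi * prior_x)

n_hidden = 20
W1 = rng.normal(scale=1.0, size=(n_hidden, 1))   # free (trained) weights
b1 = np.zeros((n_hidden, 1))

def hidden(x):
    # Hidden-layer activations for inputs x of shape (N, 1).
    return np.tanh(x @ W1.T + b1.T)               # (N, n_hidden)

def solve_output_weights():
    # Exact relation between weights: choose (W2, b2) by least squares so the
    # network reproduces the prior points (exactly, since the system is
    # underdetermined and the hidden features are independent).
    H = np.hstack([hidden(prior_x), np.ones((len(prior_x), 1))])
    theta, *_ = np.linalg.lstsq(H, prior_y, rcond=None)
    return theta[:-1].T, theta[-1:].T             # W2 (1, n_hidden), b2 (1, 1)

lr = 0.05
for step in range(2000):
    W2, b2 = solve_output_weights()               # enforce the prior constraint
    H = hidden(train_x)                           # (N, n_hidden)
    err = H @ W2.T + b2 - train_y                 # residual on training data
    # Gradient of the mean squared error w.r.t. the free weights W1, b1 only.
    dH = (err @ W2) * (1.0 - H ** 2)              # (N, n_hidden)
    W1 -= lr * (dH.T @ train_x) / len(train_x)
    b1 -= lr * dH.mean(axis=0, keepdims=True).T

W2, b2 = solve_output_weights()
print("train MSE:", float(np.mean((hidden(train_x) @ W2.T + b2 - train_y) ** 2)))
print("prior residual:", float(np.max(np.abs(hidden(prior_x) @ W2.T + b2 - prior_y))))
```

In this sketch the constraint is re-imposed at every step, so the network stays on the constrained trajectory throughout training rather than only satisfying the prior data at convergence.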