On Initializations of BP Networks with a Single Hidden Layer and Normalizations of Training Data
This paper proposes an initialization for BP (backpropagation) networks with a single hidden layer trained on pattern classification problems. The hidden-layer weights are initialized so that the corresponding hyperplanes pass through the center of the input pattern set, and the output-layer weights are initialized to zero. The initialization principle for the hidden layer can be realized equivalently by the usual random initialization when the training data are normalized to zero mean, which makes the proposed procedure quite simple. Several simulation results confirm that the proposed initialization converges better than the usual initialization, in which all the weights are set to random numbers.
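The abstract's scheme can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's reference implementation: the function name, the uniform weight range, and the layer sizes are assumptions. The key step is choosing each hidden bias as the negative inner product of its weight vector with the data mean, so every hyperplane w·x + b = 0 passes through the center of the input pattern set; zero-mean normalization of the data with zero biases is the equivalent view.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_bp_network(X, n_hidden, n_out, scale=0.5):
    """Illustrative sketch of the proposed initialization (names assumed).

    Hidden-layer weights are drawn at random; each hidden bias is set so
    the hyperplane w.x + b = 0 passes through the mean of the input
    patterns. Output-layer weights and biases start at zero.
    """
    n_in = X.shape[1]
    center = X.mean(axis=0)                       # center of the input pattern set
    W_hidden = rng.uniform(-scale, scale, (n_hidden, n_in))
    b_hidden = -W_hidden @ center                 # hyperplanes pass through the center
    W_out = np.zeros((n_out, n_hidden))           # output layer initialized to zero
    b_out = np.zeros(n_out)
    return W_hidden, b_hidden, W_out, b_out

# Equivalent view: normalize the data to zero mean, then plain random
# initialization with zero hidden biases realizes the same principle.
X = rng.normal(5.0, 2.0, size=(100, 3))          # synthetic, non-centered data
Wh, bh, Wo, bo = init_bp_network(X, n_hidden=4, n_out=2)
assert np.allclose(Wh @ X.mean(axis=0) + bh, 0.0)
```

The assertion checks the defining property: evaluating each hidden unit's affine map at the data center yields zero, i.e. its decision hyperplane passes through the center regardless of the random weight draw.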