The Evolution of a Feedforward Neural Network trained under Backpropagation
This paper presents a theoretical and empirical analysis of the evolution of a feedforward neural network (FFNN) trained using backpropagation (BP). The results of two sets of experiments are presented which illustrate the nature of BP's search through weight space as the network learns to classify the training data. The search is shown to be driven by the initial values of the weights in the output layer of neurons.
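The dependence on output-layer initialization described above can be illustrated with a minimal sketch (not the paper's actual experiments): a one-hidden-layer FFNN trained by backpropagation on XOR, where the hidden-layer weights are held to a fixed seed and only the scale of the initial output-layer weights is varied. The network size, learning rate, and scaling factors below are illustrative assumptions.

```python
# Minimal sketch: backpropagation on XOR with a 2-4-1 sigmoid network.
# Only the initial scale of the output-layer weights W2 is varied, to
# probe how it steers the search through weight space.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(output_scale, epochs=5000, lr=1.0):
    # Same seed each run: hidden weights identical across runs, so any
    # difference in the trajectory comes from the output-layer scaling.
    rng = np.random.default_rng(42)
    W1 = rng.normal(size=(2, 4))
    b1 = np.zeros(4)
    W2 = rng.normal(size=(4, 1)) * output_scale
    b2 = np.zeros(1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)              # forward pass, hidden layer
        out = sigmoid(h @ W2 + b2)            # forward pass, output layer
        d_out = (out - y) * out * (1 - out)   # backprop: output deltas
        d_h = (d_out @ W2.T) * h * (1 - h)    # backprop: hidden deltas
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h);  b1 -= lr * d_h.sum(axis=0)
    return float(np.mean((out - y) ** 2))

for scale in (0.1, 1.0):
    print(f"output-layer init scale {scale}: final MSE {train(scale):.4f}")
```

Comparing the final error (and, if logged, the weight trajectories) across `output_scale` values gives a small-scale analogue of the paper's observation that the initial output-layer weights drive the search.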