Terminal attractor learning algorithms for back propagation neural networks
[1] Robert A. Jacobs, et al. Increased rates of convergence through learning rate adaptation, 1987, Neural Networks.
[2] Norio Baba, et al. A new approach for finding the global minimum of error function of neural networks, 1989, Neural Networks.
[3] Michail Zak, et al. Terminal attractors in neural networks, 1989, Neural Networks.
[4] R. H. White. The learning rate in back-propagation systems: an application of Newton's method, 1990, 1990 IJCNN International Joint Conference on Neural Networks.
[5] Clark C. Guest, et al. Linear discriminants, logic functions, backpropagation, and improved convergence, 1990, 1990 IJCNN International Joint Conference on Neural Networks.
[6] Clark C. Guest, et al. High order neural networks with reduced numbers of interconnection weights, 1990, 1990 IJCNN International Joint Conference on Neural Networks.
[7] A. Owens, et al. Efficient training of the backpropagation network by solving a system of stiff ordinary differential equations, 1989, International 1989 Joint Conference on Neural Networks.
[8] Pietro Burrascano, et al. Smoothing backpropagation cost function by delta constraining, 1990, 1990 IJCNN International Joint Conference on Neural Networks.
[9] Bernard Widrow, et al. Improving the learning speed of 2-layer neural networks by choosing initial values of the adaptive weights, 1990, 1990 IJCNN International Joint Conference on Neural Networks.
[10] Sandeep Gulati, et al. Neural learning of constrained nonlinear transformations, 1989, Computer.
[11] J. Song, et al. Learning with hidden targets, 1990, 1990 IJCNN International Joint Conference on Neural Networks.
[12] P. Burrascano, et al. A learning rule in the Chebyshev norm for multilayer perceptrons, 1990.