A Layer-by-Layer Least Squares based Recurrent Networks Training Algorithm: Stalling and Escape