A Learning Algorithm for Continually Running Fully Recurrent Neural Networks
[1] L. McBride, et al. Optimization of time-varying systems, 1965.
[2] J. J. Hopfield, et al. Neural networks and physical systems with emergent collective computational abilities, 1982, Proceedings of the National Academy of Sciences of the United States of America.
[3] Geoffrey E. Hinton, et al. Learning internal representations by error propagation, 1986.
[4] Alan S. Lapedes, et al. A self-optimizing, nonsymmetrical neural net for content addressable memory and pattern recognition, 1986.
[5] D. Rumelhart. Learning Internal Representations by Error Propagation, Parallel Distributed Processing, 1986.
[6] Tad Hogg, et al. A Dynamical Approach to Temporal Pattern Processing, 1987, NIPS.
[7] Fernando J. Pineda, et al. Dynamics and architecture for neural computation, 1988, J. Complex..
[8] James L. McClelland, et al. Learning Subsequential Structure in Simple Recurrent Networks, 1988, NIPS.
[9] Barak A. Pearlmutter. Learning state space trajectories in recurrent neural networks: a preliminary report, 1988.
[10] Barak A. Pearlmutter. Learning State Space Trajectories in Recurrent Neural Networks, 1988, Neural Computation.
[11] Michael C. Mozer, et al. A Focused Backpropagation Algorithm for Temporal Pattern Recognition, 1989, Complex Syst..
[12] Jeffrey L. Elman, et al. Finding Structure in Time, 1990, Cogn. Sci..
[13] L. B. Almeida. A learning rule for asynchronous perceptrons with feedback in a combinatorial environment, 1990.
[14] Michael I. Jordan. Attractor dynamics and parallelism in a connectionist sequential machine, 1990.