Recurrent learning of input-output stable behaviour in function space: A case study with the Roessler attractor
[1] Ken-ichi Funahashi et al. On the approximate realization of continuous mappings by neural networks, 1989, Neural Networks.
[2] Yuichi Nakamura et al. Approximation of dynamical systems by continuous time recurrent neural networks, 1993, Neural Networks.
[3] Hong Chen et al. Approximations of continuous functionals by neural networks with application to dynamic systems, 1993, IEEE Trans. Neural Networks.
[4] B. Cessac. Occurrence of Chaos and AT Line in Random Neural Networks, 1994.
[5] Oluseyi Olurotimi et al. Recurrent neural network training with feedforward complexity, 1994, IEEE Trans. Neural Networks.
[6] A. Tesi et al. New conditions for global stability of neural networks with application to linear and quadratic programming problems, 1995.
[7] Barak A. Pearlmutter. Gradient calculations for dynamic recurrent neural networks: a survey, 1995, IEEE Trans. Neural Networks.
[8] Yuguang Fang et al. Stability analysis of dynamical neural networks, 1996, IEEE Trans. Neural Networks.
[9] A. Michel et al. Stability analysis of differential inclusions in Banach space with applications to nonlinear systems with time delays, 1996.
[10] Helge J. Ritter et al. Input-Output Stability of Recurrent Neural Networks with Delays Using Circle Criteria, 1998, NC.
[11] David H. Owens et al. Existence, learning, and replication of periodic motions in recurrent neural networks, 1998, IEEE Trans. Neural Networks.
[12] Jochen J. Steil et al. Input-output stability of recurrent neural networks, 1999.
[13] Helge J. Ritter et al. Maximisation of stability ranges for recurrent neural networks subject to on-line adaptation, 1999, ESANN.