Hopfield net generation, encoding and classification of temporal trajectories

The transient dynamics of Hopfield networks are exploited to solve both path planning and temporal pattern classification. For these problems, Lagrangian techniques and two well-known learning algorithms for recurrent networks are used. For path planning, the Williams and Zipser learning algorithm is implemented: a set of temporal trajectories that join two points, pass through prescribed intermediate points, avoid obstacles, and jointly form the shortest possible path is discovered and encoded in the weights of the net. Temporal pattern classification is based on an extension, derived by variational methods, of Pearlmutter's algorithm for the generation of temporal patterns. The algorithm is applied to a simple problem of recognizing five temporal trajectories, with satisfactory robustness to distortions.
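The transient behaviour referred to above is that of the continuous-time, graded-response Hopfield model [6], whose states evolve toward attractors along trajectories shaped by the weight matrix. As a minimal sketch of the forward dynamics only (the training of the weights by the Williams–Zipser or Pearlmutter algorithms is not shown, and the function name, parameters, and toy weight matrix are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def hopfield_trajectory(W, b, x0, tau=1.0, dt=0.01, steps=500):
    """Integrate the graded-response Hopfield dynamics
        tau * dx/dt = -x + W @ tanh(x) + b
    with forward Euler, returning the full state trajectory."""
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        x = x + (dt / tau) * (-x + W @ np.tanh(x) + b)
        traj.append(x.copy())
    return np.array(traj)

# Toy example: an antisymmetric coupling between two units yields a
# spiralling (oscillatory) transient rather than a direct relaxation,
# illustrating the kind of transient a learning rule can shape.
W = np.array([[0.0, 2.0],
              [-2.0, 0.0]])
b = np.zeros(2)
traj = hopfield_trajectory(W, b, x0=[1.0, 0.0])
print(traj.shape)
```

A trajectory-learning algorithm such as Pearlmutter's [38] adjusts `W` (and `b`) by gradient descent so that the transient `traj` tracks a desired target path over time, which is how the trajectories in this paper are encoded in the weights.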

[1]  Richard S. Sutton,et al.  Reinforcement learning architectures for animats , 1991 .

[2]  Judith E. Dayhoff,et al.  Trajectory recognition with a time-delay neural network , 1992, [Proceedings 1992] IJCNN International Joint Conference on Neural Networks.

[3]  R. Courant,et al.  Methods of Mathematical Physics , 1962 .

[4]  Marco Saerens,et al.  Classification of temporal trajectories by continuous-time recurrent nets , 1994, Neural Networks.

[5]  Y. C. Lee,et al.  Time warping recurrent neural networks and trajectory classification , 1992, [Proceedings 1992] IJCNN International Joint Conference on Neural Networks.

[6]  J J Hopfield,et al.  Neurons with graded response have collective computational properties like those of two-state neurons. , 1984, Proceedings of the National Academy of Sciences of the United States of America.

[7]  Bill Baird,et al.  Bifurcation and category learning in network models of oscillating cortex , 1990 .

[8]  Yang He,et al.  2-D Shape Classification Using Hidden Markov Model , 1991, IEEE Trans. Pattern Anal. Mach. Intell..

[9]  Kenji Doya,et al.  Adaptive neural oscillator using continuous-time back-propagation learning , 1989, Neural Networks.

[10]  Joachim M. Buhmann,et al.  Pattern Segmentation in Associative Memory , 1990, Neural Computation.

[11]  Terrence J. Sejnowski,et al.  Faster Learning for Dynamic Recurrent Backpropagation , 1990, Neural Computation.

[12]  Jacob Barhen,et al.  Fast temporal neural learning using teacher forcing , 1991, IJCNN-91-Seattle International Joint Conference on Neural Networks.

[13]  Shahriar Najand,et al.  Application of Self-Organizing Neural Networks for Mobile Robot Environment Learning , 1993 .

[14]  L. K. Li Approximation theory and recurrent networks , 1992, [Proceedings 1992] IJCNN International Joint Conference on Neural Networks.

[15]  Fernando J. Pineda,et al.  Time Dependent Adaptive Neural Networks , 1989, NIPS.

[16]  M. Gherrity,et al.  A learning algorithm for analog, fully recurrent neural networks , 1989, International 1989 Joint Conference on Neural Networks.

[17]  Pierre Roussel-Ragot,et al.  Neural Networks and Nonlinear Adaptive Filtering: Unifying Concepts and New Algorithms , 1993, Neural Computation.

[18]  Jürgen Schmidhuber,et al.  A Fixed Size Storage O(n3) Time Complexity Learning Algorithm for Fully Recurrent Continually Running Networks , 1992, Neural Computation.

[19]  Rama Chellappa,et al.  Stochastic models for closed boundary analysis: Representation and reconstruction , 1981, IEEE Trans. Inf. Theory.

[20]  Masa-aki Sato  A learning algorithm to teach spatiotemporal patterns to recurrent neural networks , 1990, Biological Cybernetics.

[21]  Richard Durbin,et al.  An analogue approach to the travelling salesman problem using an elastic net method , 1987, Nature.

[22]  Yoshiki Uchikawa,et al.  Learning process of recurrent neural networks , 1991, [Proceedings] 1991 IEEE International Joint Conference on Neural Networks.

[23]  Filson H. Glanz,et al.  An Autoregressive Model Approach to Two-Dimensional Shape Classification , 1986, IEEE Transactions on Pattern Analysis and Machine Intelligence.

[24]  Jacob Barhen,et al.  Adjoint-Functions and Temporal Learning Algorithms in Neural Networks , 1990, NIPS.

[25]  Michael I. Jordan Attractor dynamics and parallelism in a connectionist sequential machine , 1990 .

[26]  Didier Keymeulen,et al.  A Reactive Robot Navigation System Based on a Fluid Dynamics Metaphor , 1990, PPSN.

[27]  Michael Lemmon,et al.  2-Degree-of-freedom Robot Path Planning using Cooperative Neural Fields , 1991, Neural Computation.

[28]  Yann LeCun,et al.  Reverse TDNN: An Architecture For Trajectory Generation , 1991, NIPS.

[29]  Ching Y. Suen,et al.  The State of the Art in Online Handwriting Recognition , 1990, IEEE Trans. Pattern Anal. Mach. Intell..

[30]  Paul J. Werbos,et al.  Backpropagation Through Time: What It Does and How to Do It , 1990, Proc. IEEE.

[31]  Masa-aki Sato,et al.  APOLONN brings us to the real world: learning nonlinear dynamics and fluctuations in nature , 1990, 1990 IJCNN International Joint Conference on Neural Networks.

[32]  Ronald J. Williams,et al.  A Learning Algorithm for Continually Running Fully Recurrent Neural Networks , 1989, Neural Computation.

[33]  Alan L. Yuille,et al.  Generalized Deformable Models, Statistical Physics, and Matching Problems , 1990, Neural Computation.

[34]  Jacob Barhen,et al.  Learning a trajectory using adjoint functions and teacher forcing , 1992, Neural Networks.

[35]  Barak A. Pearlmutter Learning state space trajectories in recurrent neural networks : a preliminary report. , 1988 .

[36]  Sylvie Thiria,et al.  Automata networks and artificial intelligence , 1987 .

[37]  Oussama Khatib,et al.  Real-Time Obstacle Avoidance for Manipulators and Mobile Robots , 1986 .

[38]  Barak A. Pearlmutter Learning State Space Trajectories in Recurrent Neural Networks , 1989, Neural Computation.

[39]  M. Usher,et al.  Parallel Activation of Memories in an Oscillatory Neural Network , 1991, Neural Computation.

[40]  Jing Peng,et al.  An Efficient Gradient-Based Algorithm for On-Line Training of Recurrent Network Trajectories , 1990, Neural Computation.