Using Directional Fibers to Locate Fixed Points of Recurrent Neural Networks

We introduce mathematical objects that we call “directional fibers,” and show how they enable a new strategy for systematically locating fixed points in recurrent neural networks. We analyze this approach mathematically and use computer experiments to show that it consistently locates many fixed points in networks of arbitrary size with unconstrained connection weights. Comparison with a traditional method shows that our strategy is competitive and complementary, often finding larger and distinct sets of fixed points. We provide theoretical groundwork for further analysis and suggest next steps for developing the method into a more powerful solver.
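To make the fixed-point problem concrete: for a recurrent network with update rule v ↦ tanh(Wv), a fixed point is any state v* satisfying v* = tanh(Wv*). The sketch below locates one such point with a plain Newton iteration on the residual f(v) = tanh(Wv) − v. This is only a generic local root-finder for illustration, not the directional-fiber traversal the paper introduces; the weight matrix and initial guess are arbitrary choices made for the example.

```python
import numpy as np

def find_fixed_point(W, v0, max_iter=100, tol=1e-10):
    """Newton iteration for a fixed point of v = tanh(W @ v).

    Illustrative local solver only: it converges to at most one
    fixed point per initial guess, unlike a systematic traversal.
    """
    v = v0.astype(float).copy()
    n = len(v)
    for _ in range(max_iter):
        f = np.tanh(W @ v) - v          # residual; zero at a fixed point
        if np.linalg.norm(f) < tol:
            break
        # Jacobian of f: diag(1 - tanh(Wv)^2) @ W - I
        J = (1.0 - np.tanh(W @ v) ** 2)[:, None] * W - np.eye(n)
        v -= np.linalg.solve(J, f)      # Newton step
    return v

# Example: a 2-neuron network with strong self-excitation,
# which has nonzero stable fixed points in addition to the origin.
W = np.array([[2.0, 0.0],
              [0.0, 2.0]])
v_star = find_fixed_point(W, np.array([0.5, 0.5]))
```

A local method like this must be restarted from many initial guesses and still offers no guarantee of finding all fixed points, which is the gap the directional-fiber strategy is designed to address.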
