Fractal encoding of context-free grammars in connectionist networks

Connectionist network learning of context-free languages has so far been applied only to very simple cases and has often relied on an external stack. Learning complex context-free languages with a homogeneous neural mechanism appears to be a much harder problem. The current paper takes a step toward solving this problem by analyzing context-free grammar computation (without addressing learning) in a class of analog computers called dynamical automata, which are naturally implemented in connectionist networks. The result is a widely applicable method of using fractal sets to organize infinite-state computations in a bounded state space. An appealing consequence is the development of parameter-space maps, which locate various complex computers in spatial relationships to one another. An example suggests that such a global perspective on the organization of the parameter space may be helpful in solving the hard problem of getting connectionist networks to learn complex grammars from examples.

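To make the construction concrete, here is a minimal sketch (not taken from the paper) of a one-dimensional dynamical automaton for the simplest context-free language, a^n b^n. Reading 'a' applies a contraction that acts as a push onto an implicit stack; reading 'b' applies the inverse expansion that acts as a pop. Stack depth n is encoded as the point 2^-n, so unboundedly many stack states occupy a bounded, self-similar subset of the unit interval. The function name and control flow below are illustrative assumptions, not the paper's own definitions.

def accepts_anbn(s: str) -> bool:
    """Recognize a^n b^n (n >= 1) with a one-dimensional dynamical automaton.

    Hypothetical sketch: the single analog state z stays in the bounded
    interval (0, 1]. Reading 'a' applies the contraction z -> z/2 (push);
    reading 'b' applies the expansion z -> 2z (pop). Powers of 2 are exact
    in binary floating point, so the final equality test is reliable.
    """
    z = 1.0
    seen_b = False
    for ch in s:
        if ch == 'a':
            if seen_b:           # an 'a' after a 'b' breaks the a^n b^n shape
                return False
            z /= 2.0             # contraction: push one stack symbol
        elif ch == 'b':
            if z >= 1.0:         # expansion would leave (0, 1]: empty stack
                return False
            seen_b = True
            z *= 2.0             # expansion: pop one stack symbol
        else:
            return False         # alphabet is {a, b}
    return z == 1.0 and seen_b   # every push matched by a pop

if __name__ == "__main__":
    for w in ["ab", "aabb", "aaabbb", "aab", "abab", "ba"]:
        print(f"{w!r}: {accepts_anbn(w)}")

Roughly speaking, richer grammars call for several such contracting maps whose images interleave; the resulting Cantor-like sets of reachable states are the fractal encodings of the title, and the contraction parameters are what the abstract's parameter-space maps lay out in spatial relationship to one another.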