For neural networks, function determines form

It is proved that, generically, the input/output (I/O) behavior of a net uniquely determines its internal form, up to simple symmetries. The exceptional sets on which this conclusion fails are thin, in the sense that they are contained in sets defined by algebraic equalities. More precisely, it is shown that, under very weak genericity assumptions, the following holds: given two nets whose neurons all have the same nonlinear activation function σ, if the two nets have equal behaviors as "black boxes", then they must have the same number of neurons and, except at most for sign reversals at each node, the same weights. These results imply unique identifiability of the parameters under the class of all possible I/O experiments. In the analytic case, a further result shows that single experiments are (generically) sufficient for identification. Some partial results are obtained even when the precise nonlinearities are not known.
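The sign-reversal symmetry mentioned above can be made concrete with a small numerical check. The following is a minimal sketch, not taken from the paper: it uses a discrete-time recurrent net x_{t+1} = tanh(A x_t + B u_t), y_t = C x_t as a stand-in for the paper's setting, and all names (simulate, A, B, C, the random seed) are illustrative assumptions. Because tanh is odd, conjugating the weights by a diagonal matrix of ±1's yields a differently parameterized net with identical black-box behavior, which is exactly the kind of symmetry the uniqueness result allows.

```python
# Minimal sketch (not from the paper): sign reversals at each neuron
# leave the I/O behavior of a tanh recurrent net unchanged.
import numpy as np

rng = np.random.default_rng(0)

n, m, p, T_steps = 4, 2, 3, 25          # neurons, inputs, outputs, time steps
A = rng.standard_normal((n, n))         # recurrent weights
B = rng.standard_normal((n, m))         # input weights
C = rng.standard_normal((p, n))         # output weights

def simulate(A, B, C, u_seq, sigma=np.tanh):
    """Run x_{t+1} = sigma(A x_t + B u_t), y_t = C x_t from x_0 = 0."""
    x = np.zeros(A.shape[0])
    ys = []
    for u in u_seq:
        ys.append(C @ x)
        x = sigma(A @ x + B @ u)
    return np.array(ys)

# Sign reversal at each node: T = diag(+-1).  Since tanh is odd,
# the transformed net (T A T, T B, C T) has the same I/O map.
T = np.diag(rng.choice([-1.0, 1.0], size=n))
A2, B2, C2 = T @ A @ T, T @ B, C @ T

u_seq = rng.standard_normal((T_steps, m))   # an arbitrary input "experiment"
y1 = simulate(A, B, C, u_seq)
y2 = simulate(A2, B2, C2, u_seq)
assert np.allclose(y1, y2), "sign-flipped net should be I/O equivalent"
print("maximum output difference:", np.abs(y1 - y2).max())
```

Running the sketch prints a difference on the order of machine precision. The theorem says that, generically, these sign reversals (together with relabeling of neurons) account for all nets sharing a given I/O behavior.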
