Comparison of convergence and stability properties for the state and output solutions of neural networks

A typical neuron cell is characterized by a state variable and a neuron output, the latter obtained by passing the state through a nonlinear active device implementing the neuron activation. The paper introduces a wide class of neural networks for which the state solutions and the output solutions enjoy the same convergence and stability properties. The class, which includes the standard cellular neural networks as a special case, is characterized by piecewise-linear Lipschitz-continuous neuron activations, Lipschitz-continuous (possibly high-order) interconnections between neurons, and asymptotically stable isolated neuron cells. The paper also shows that if any of the assumptions on the smoothness of the neuron activations or of the interconnecting structure, or on the stability of the isolated neuron cells, is relaxed, then the equivalence between the convergence properties of the state solutions and the output solutions is in general no longer guaranteed. To this end, three relevant classes of neural networks from the literature are considered, each violating one of the assumptions made in the paper, and it is shown that the state solutions of these networks enjoy stronger convergence properties than the output solutions, or vice versa. Copyright © 2010 John Wiley & Sons, Ltd.
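To make the state/output distinction concrete, the following is a minimal sketch (not taken from the paper) of a single isolated cell of a standard cellular neural network, whose piecewise-linear saturating activation f(x) = 0.5(|x+1| - |x-1|) is the classical Chua–Yang nonlinearity. The self-feedback gain and bias values are arbitrary illustrative choices; with the gain below 1 the isolated cell is asymptotically stable, and since f is Lipschitz continuous the output y(t) = f(x(t)) converges whenever the state x(t) does.

```python
def f(x):
    """Standard CNN piecewise-linear activation (Lipschitz, saturating)."""
    return 0.5 * (abs(x + 1.0) - abs(x - 1.0))

def simulate(x0, a=0.5, bias=0.2, dt=1e-3, steps=20000):
    """Forward-Euler integration of a single isolated CNN cell:

        x' = -x + a*f(x) + bias

    The gain a and the bias are illustrative; with |a| < 1 the cell is
    asymptotically stable, so the state x(t) converges to an equilibrium
    and the output y(t) = f(x(t)) converges as well.
    """
    x = x0
    for _ in range(steps):
        x += dt * (-x + a * f(x) + bias)
    return x, f(x)

# State and output limits for one trajectory starting outside the
# linear region of the activation.
x_inf, y_inf = simulate(x0=2.0)
```

For these parameter choices the equilibrium lies in the linear region |x| < 1, where f(x) = x, so the state settles at x* = bias / (1 - a) = 0.4 and the output coincides with it; the point of the sketch is only that convergence of x(t) carries over to y(t) through the Lipschitz activation.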
