Convergence of a neural network for sparse approximation using the nonsmooth Łojasiewicz inequality

Sparse approximation is an optimization program that produces state-of-the-art results in many applications in signal processing and engineering. To deploy this approach in real time, solvers are needed that are faster than those currently available in digital hardware. The Locally Competitive Algorithm (LCA) is a dynamical system designed to solve this class of sparse approximation problems in continuous time. Before implementing this network in analog VLSI, however, it is essential to provide performance guarantees. This paper presents new results on the convergence of the LCA neural network. Using recently developed methods based on the Łojasiewicz inequality for nonsmooth functions, we prove that the output and state trajectories converge to a single fixed point. This improves on previous results by guaranteeing convergence to a singleton even when the optimization program has infinitely many, non-isolated solutions.
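As context for the dynamics analyzed above, below is a minimal NumPy sketch of the LCA applied to the common ℓ₁-regularized least-squares (basis pursuit denoising) objective, assuming a soft-thresholding activation and forward-Euler integration. The function names, step sizes, and problem dimensions are illustrative choices, not taken from the paper.

```python
import numpy as np

def soft_threshold(u, lam):
    """Soft-thresholding activation: a = T_lam(u)."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(Phi, y, lam=0.1, tau=0.01, dt=0.001, n_steps=5000):
    """Forward-Euler simulation of the LCA ODE
       tau * du/dt = -u + Phi^T y - (Phi^T Phi - I) a,  with a = T_lam(u)."""
    n = Phi.shape[1]
    b = Phi.T @ y                    # constant driving input
    G = Phi.T @ Phi - np.eye(n)      # lateral-inhibition (competition) matrix
    u = np.zeros(n)                  # internal node states
    for _ in range(n_steps):
        a = soft_threshold(u, lam)   # network output (active coefficients)
        u += (dt / tau) * (-u + b - G @ a)
    return soft_threshold(u, lam)

# Illustrative usage: recover a sparse code of a synthetic signal.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 256))
Phi /= np.linalg.norm(Phi, axis=0)   # unit-norm dictionary columns
a_true = np.zeros(256)
a_true[rng.choice(256, 8, replace=False)] = rng.standard_normal(8)
y = Phi @ a_true
a_hat = lca(Phi, y, lam=0.05)
print("nonzeros in recovered code:", np.count_nonzero(np.round(a_hat, 3)))
```

The lateral-inhibition term (ΦᵀΦ − I)a implements the local competition between nodes that gives the algorithm its name: an active node suppresses other nodes whose dictionary elements are correlated with its own.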
