Sparse approximation on energy-efficient hardware

Physically and computationally efficient hardware, coupled with fast sparse approximation solvers, provides opportunities for real-time visual processing on low-power embedded platforms. This paper presents a system pairing the Locally Competitive Algorithm (LCA) with the low-power, highly programmable, brain-inspired IBM TrueNorth chip. A small-scale spiking LCA network is successfully implemented on the TrueNorth chip, producing node dynamics comparable to those of a discretized LCA network. The sparse representations computed by the LCA implemented on TrueNorth yield minimal reconstruction error in every trial. This performance is achieved using only 11 of the 4096 cores available on the chip, leaving substantial headroom to scale the system toward real-world applications.
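To make the LCA dynamics concrete, the sketch below implements a discretized (Euler-stepped) LCA in Python: each node leakily integrates a feed-forward drive Φᵀs while being inhibited by the thresholded activity of the other nodes through ΦᵀΦ − I, and the soft-thresholded potentials form the sparse code. This is a minimal illustration of the standard LCA dynamics, not the TrueNorth network configuration used in this paper; the dictionary, input signal, and parameter values are assumptions chosen only for demonstration.

```python
# Minimal sketch of a discretized Locally Competitive Algorithm (LCA).
# The dictionary, signal, and parameters below are illustrative assumptions.
import numpy as np

def soft_threshold(u, lam):
    """Soft-thresholding activation: a = T_lambda(u)."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(signal, dictionary, lam=0.1, tau=10.0, dt=1.0, n_steps=500):
    """Run discretized LCA dynamics and return the sparse coefficients.

    signal:     input vector s (length m)
    dictionary: matrix Phi with unit-norm columns (m x n)
    """
    n = dictionary.shape[1]
    drive = dictionary.T @ signal                        # feed-forward drive Phi^T s
    inhibition = dictionary.T @ dictionary - np.eye(n)   # lateral inhibition Phi^T Phi - I
    u = np.zeros(n)                                      # node (membrane) potentials
    for _ in range(n_steps):
        a = soft_threshold(u, lam)                       # thresholded activations
        du = (drive - u - inhibition @ a) / tau          # LCA node dynamics
        u += dt * du                                     # Euler step
    return soft_threshold(u, lam)

# Illustrative usage with a random unit-norm dictionary (hypothetical data).
rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 128))
Phi /= np.linalg.norm(Phi, axis=0)
s = Phi[:, 3] + 0.5 * Phi[:, 40]                         # signal built from two atoms
a = lca(s, Phi)
print("nonzero coefficients:", np.count_nonzero(a))
print("reconstruction error:", np.linalg.norm(s - Phi @ a))
```

In the TrueNorth system described above, the same competition is realized with spiking neurons on the chip's neurosynaptic cores rather than with continuous-valued state updates, which is why the observed node dynamics are compared against this discretized reference.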
