Sparse coding with a global connectivity constraint

Sparse coding techniques for basis pursuit have generally enforced sparsity through L1-type norms on the coefficients of the bases. When applied to natural scenes, these algorithms famously recover the Gabor-like basis functions of the primary visual cortex (V1) of the mammalian brain. In this paper, inspired further by the architecture of the brain, we propose a technique that not only recovers the Gabor basis but does so while respecting global power-law connectivity patterns. Such global constraints are beneficial from a biological perspective in terms of efficient wiring, robustness, and related properties. We draw on the similarity between sparse coding and neural networks to formulate the problem and impose such global connectivity patterns.
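To make the setup concrete, the following is a minimal sketch (not the paper's actual algorithm) of L1-penalized sparse coding solved by ISTA, combined with a simple reweighted-L1 heuristic in the spirit of the scale-free-network literature: per-basis penalty weights are set inversely to basis usage, biasing the usage profile toward a heavy tail. All names and parameter choices here are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    # Elementwise soft-thresholding: the proximal operator of the L1 norm.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_code(X, D, lam, w=None, n_iter=200):
    """ISTA for min_A 0.5*||X - D A||_F^2 + lam * sum_j w_j * ||A_j||_1.

    w is a per-basis weight vector; w=None gives plain L1 sparse coding.
    Reweighting w across outer iterations (below) is one illustrative way
    to bias basis usage toward a heavy-tailed, power-law-like profile.
    """
    if w is None:
        w = np.ones(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        A = soft_threshold(A - D.T @ (D @ A - X) / L,
                           (lam / L) * w[:, None])
    return A

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)              # unit-norm dictionary atoms
# Synthetic data generated from sparse codes over D.
X = D @ np.where(rng.random((32, 50)) < 0.1,
                 rng.standard_normal((32, 50)), 0.0)

A = sparse_code(X, D, lam=0.1)
# Reweighting step: heavily used bases become cheaper, rarely used bases
# more expensive, concentrating activity on a few "hub" bases.
usage = np.abs(A).sum(axis=1)
w = 1.0 / (usage / usage.mean() + 1e-3)
A_rw = sparse_code(X, D, lam=0.1, w=w)
```

The reweighting loop here is a rough stand-in for the convex scale-free formulations cited in the related work; it only conveys how a global connectivity bias can be layered on top of a standard L1 sparse-coding objective.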
