Blind Nonnegative Source Separation Using Biological Neural Networks

Blind source separation—the extraction of independent sources from a mixture—is an important problem for both artificial and natural signal processing. Here, we address a special case of this problem in which the sources (but not the mixing matrix) are known to be nonnegative, for example due to the physical nature of the sources. We seek a solution to this problem that can be implemented by biologically plausible neural networks. Specifically, we consider the online setting, where the data are streamed to a neural network one sample at a time. The novelty of our approach is that we formulate blind nonnegative source separation as a similarity matching problem and derive neural networks from the similarity matching objective. Importantly, synaptic weights in our networks are updated according to biologically plausible local learning rules.
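To make the high-level description concrete, the sketch below shows the kind of network such a formulation gives rise to. Nonnegative similarity matching can be posed as minimizing ||X^T X - Y^T Y||_F^2 over nonnegative outputs Y, and solving this objective online leads to a single layer of rectified units with Hebbian feedforward plasticity and anti-Hebbian lateral plasticity. This is an illustrative sketch, not the exact algorithm derived in the paper: the learning rate, initialization, number of neural-dynamics iterations, and the omission of any whitening preprocessing stage are all assumptions made for brevity.

```python
import numpy as np

# Minimal illustrative Hebbian/anti-Hebbian network for blind
# nonnegative source separation in the online setting: each sample
# is seen once, the output is computed by rectified neural dynamics,
# and every weight update is local (it depends only on the activities
# of the pre- and postsynaptic neurons).

rng = np.random.default_rng(0)
n_sources, n_mixtures, n_samples = 3, 5, 5000

S = np.abs(rng.laplace(size=(n_sources, n_samples)))  # nonnegative sources
A = rng.normal(size=(n_mixtures, n_sources))          # unknown mixing matrix (any sign)
X = A @ S                                             # observed mixtures, streamed column by column

W = 0.1 * rng.normal(size=(n_sources, n_mixtures))    # feedforward weights (Hebbian)
M = np.eye(n_sources)                                 # lateral inhibitory weights (anti-Hebbian)
eta = 1e-3                                            # fixed learning rate (assumption)

for t in range(n_samples):
    x = X[:, t]

    # Neural dynamics: iterate the rectified fixed point
    # y_i = max(0, (W x)_i - sum_{j != i} M_ij * y_j) until it settles.
    c = W @ x
    y = np.zeros(n_sources)
    for _ in range(50):
        for i in range(n_sources):
            y[i] = max(0.0, c[i] - (M[i] @ y - M[i, i] * y[i]))

    # Local plasticity: Hebbian update of W, anti-Hebbian update of M.
    W += eta * (np.outer(y, x) - (y**2)[:, None] * W)
    M += eta * (np.outer(y, y) - (y**2)[:, None] * M)
    np.fill_diagonal(M, 1.0)   # keep lateral self-connections fixed
```

The lateral weights M decorrelate the outputs while the rectification enforces nonnegativity; in practice the mixtures would typically be whitened before this stage, a step omitted here to keep the sketch short.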
