Nonlinear ICA through low-complexity autoencoders

We train autoencoders with flat minimum search (FMS), a regularizer-based algorithm for finding low-complexity networks that can be described by few bits of information. As a by-product, this encourages nonlinear independent component analysis (ICA) and sparse codes of the input data.
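
To make the idea concrete, here is a minimal sketch of an autoencoder trained with a flatness-style penalty added to the reconstruction loss. This is not the paper's FMS implementation: it keeps only a simplified first-order surrogate of the flatness term (the log of the summed squared output sensitivities of each weight), the network sizes and the trade-off weight `lam` are illustrative, the batch of random data merely stands in for nonlinearly mixed sources, and names such as `TinyAutoencoder` and `flatness_penalty` are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical toy autoencoder; layer sizes are illustrative, not from the paper.
class TinyAutoencoder(nn.Module):
    def __init__(self, n_in=4, n_hidden=2):
        super().__init__()
        self.enc = nn.Linear(n_in, n_hidden)
        self.dec = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        code = torch.sigmoid(self.enc(x))       # hidden code units
        recon = torch.sigmoid(self.dec(code))   # reconstruction of the input
        return recon, code

def flatness_penalty(outputs, params, eps=1e-8):
    """Simplified surrogate for the FMS flatness term:
    sum over weights w of log( sum over output units k of (d y_k / d w)^2 ),
    with sensitivities summed over the batch for brevity.
    This is only a first-order stand-in, not the full FMS regularizer."""
    sq_sens = [torch.zeros_like(p) for p in params]
    for k in range(outputs.shape[-1]):
        grads = torch.autograd.grad(outputs[..., k].sum(), params,
                                    create_graph=True, retain_graph=True)
        sq_sens = [s + g ** 2 for s, g in zip(sq_sens, grads)]
    return sum(torch.log(s + eps).sum() for s in sq_sens)

model = TinyAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x = torch.rand(32, 4)   # toy data standing in for nonlinearly mixed sources
lam = 1e-3              # assumed trade-off weight; would need tuning in practice

for step in range(200):
    opt.zero_grad()
    recon, code = model(x)
    mse = ((recon - x) ** 2).mean()                               # reconstruction error
    penalty = flatness_penalty(recon, list(model.parameters()))   # flatness surrogate
    loss = mse + lam * penalty
    loss.backward()
    opt.step()
```

The sketch only illustrates the coupling of reconstruction error and a flatness term in a single loss: driving the log-sensitivities down makes many weights irrelevant to the outputs, which is the mechanism by which superfluous code units get pruned and the remaining code tends toward sparse, roughly independent components.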
