Replicator neural networks for universal optimal source coding

Replicator neural networks self-organize by using their inputs as desired outputs; in doing so, they internally form a compressed representation of the input data. A theorem shows that a class of replicator networks can, through minimization of mean squared reconstruction error (for instance, by training on raw data examples), carry out optimal data compression for arbitrary data vector sources. Data manifolds, a new general model of data sources, are then introduced, and a second theorem shows that, in a practically important limiting case, optimal-compression replicator networks operate by creating an essentially unique natural coordinate system for the manifold.
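
To make the replicator idea concrete, the sketch below trains a minimal autoassociative network, in which the input vector itself serves as the training target, by gradient descent on mean squared reconstruction error. The layer sizes, tanh bottleneck, synthetic data source, and learning rate are illustrative assumptions for this sketch, not the specific multilayer construction analyzed in the paper's theorems.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 8, 2          # illustrative sizes (assumptions for this sketch)

# Synthetic source: points on a one-dimensional curve embedded in 8-D space,
# i.e., data lying near a low-dimensional manifold.
t = rng.uniform(0.0, 2.0 * np.pi, size=(1000, 1))
X = 0.5 * np.hstack([np.cos(k * t) for k in range(1, n_in + 1)])

# One tanh bottleneck layer, linear output layer.
W1 = 0.1 * rng.standard_normal((n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.standard_normal((n_hidden, n_in)); b2 = np.zeros(n_in)

lr = 0.05
for epoch in range(2000):
    # Forward pass: code = tanh(X W1 + b1); reconstruction = code W2 + b2.
    H = np.tanh(X @ W1 + b1)
    Y = H @ W2 + b2

    # The input is its own target: minimize mean squared reconstruction error.
    err = Y - X
    loss = np.mean(err ** 2)

    # Backpropagate the MSE loss and take a gradient step.
    dY = 2.0 * err / err.size
    dW2, db2 = H.T @ dY, dY.sum(axis=0)
    dH = (dY @ W2.T) * (1.0 - H ** 2)      # tanh'(a) = 1 - tanh(a)^2
    dW1, db1 = X.T @ dH, dH.sum(axis=0)
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= lr * g

print(f"final reconstruction MSE: {loss:.5f}")
```

After training, the bottleneck activities serve as the compressed code; in the paper's terminology, they would play the role of coordinates on the data manifold.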
