Redundancy and Dimensionality Reduction in Sparse-Distributed Representations of Natural Objects in Terms of Their Local Features

Low-dimensional representations are key to solving problems in high-level vision, such as face compression and recognition. Factorial coding strategies for reducing the redundancy present in natural images on the basis of their second-order statistics have been successful in accounting for both psychophysical and neurophysiological properties of early vision. Class-specific representations are presumably formed later, at the higher-level stages of cortical processing. Here we show that when retinotopic factorial codes are derived for ensembles of natural objects, such as human faces, not only redundancy, but also dimensionality is reduced. We also show that objects are built from parts in a non-Gaussian fashion which allows these local-feature codes to have dimensionalities that are substantially lower than the respective Nyquist sampling rates.
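The dimensionality reduction described above can be illustrated with a minimal sketch (an assumption for illustration, not the paper's actual code or data): a synthetic ensemble of "objects" is built as sparse, non-negative combinations of local parts, and the eigenvalues of the ensemble covariance (a second-order, PCA-style factorial code) show that nearly all variance is captured by far fewer components than the pixel sampling rate.

```python
import numpy as np

# Hypothetical toy setup: objects on a 1-D "retina" built from local features.
rng = np.random.default_rng(0)

n_pixels = 64      # retinotopic sampling rate of each object image
n_parts = 8        # number of local features the objects are built from
n_objects = 500    # size of the ensemble

# Local parts: localized Gaussian bumps on the retina.
centers = np.linspace(4, 60, n_parts)
x = np.arange(n_pixels)
parts = np.exp(-0.5 * ((x[None, :] - centers[:, None]) / 3.0) ** 2)

# Non-Gaussian (sparse, non-negative) part activations: each object uses
# only a random subset of parts, with exponentially distributed strengths.
active = rng.random((n_objects, n_parts)) < 0.5
coeffs = rng.exponential(1.0, (n_objects, n_parts)) * active
ensemble = coeffs @ parts

# Second-order statistics: eigenvalues of the ensemble covariance (PCA).
cov = np.cov(ensemble, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
explained = np.cumsum(eigvals) / eigvals.sum()

# Effective dimensionality: components needed for 99% of the variance.
dim = int(np.searchsorted(explained, 0.99) + 1)
print(f"effective dimensionality: {dim} of {n_pixels} pixels")
```

Because the ensemble is generated from only a handful of local parts, the covariance is effectively low-rank, so the code's effective dimensionality comes out far below the Nyquist sampling rate of 64 pixels, mirroring the paper's qualitative claim.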
