Learning classes of efficient codes

We are interested in learning efficient codes to represent classes of different images. The image classes are modeled with an ICA mixture model, which assumes that the data were generated by several mutually exclusive classes, each described as a linear combination of non-Gaussian sources. The model parameters can be adapted with an approximate expectation-maximization (EM) approach that maximizes the data likelihood. We demonstrate that this method learns classes of efficient codes for images containing a variety of different structures. The learned codes can be used for image compression, denoising, and classification. Compared with standard image coding methods, the ICA mixture model gives better encoding results because the codes are adapted to the structure of the data.
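The EM-style procedure described above can be sketched in a few lines: in the E-step, each data point is assigned a responsibility under each class (its class-conditional likelihood under a non-Gaussian source prior times the class prior), and in the M-step each class's unmixing matrix is updated with a responsibility-weighted natural-gradient ICA rule. The following is a minimal toy sketch, not the paper's implementation; the Laplacian source prior, the learning rate, and the initialization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_lik(X, W, b):
    # log p(x|k) under a Laplacian source prior (an assumed super-Gaussian
    # density): log|det W| + sum_i log p(s_i), with s = W (x - b).
    S = (X - b) @ W.T
    return np.linalg.slogdet(W)[1] - np.abs(S).sum(axis=1) - S.shape[1] * np.log(2.0)

# Toy data: two classes of Laplacian sources mixed by different matrices,
# the second class shifted so the classes are mutually exclusive in x-space.
n, d = 500, 2
A1 = np.array([[2.0, 0.5], [0.5, 1.0]])
A2 = np.array([[1.0, -1.0], [0.3, 2.0]])
X = np.vstack([rng.laplace(size=(n, d)) @ A1.T,
               rng.laplace(size=(n, d)) @ A2.T + 5.0])

K = 2
W = [np.eye(d) + 0.1 * rng.standard_normal((d, d)) for _ in range(K)]
b = [X[rng.integers(len(X))].copy() for _ in range(K)]
log_pi = np.full(K, -np.log(K))  # class priors

for it in range(200):
    # E-step: responsibilities p(k|x) via a log-sum-exp normalization.
    L = np.stack([log_pi[k] + log_lik(X, W[k], b[k]) for k in range(K)], axis=1)
    L -= L.max(axis=1, keepdims=True)
    R = np.exp(L)
    R /= R.sum(axis=1, keepdims=True)

    # M-step: responsibility-weighted natural-gradient ICA update per class.
    for k in range(K):
        r = R[:, k:k + 1]
        b[k] = (r * X).sum(axis=0) / (r.sum() + 1e-9)
        S = (X - b[k]) @ W[k].T
        g = -np.sign(S)  # score function of the Laplacian prior
        dW = (np.eye(d) + (r * g).T @ S / (r.sum() + 1e-9)) @ W[k]
        W[k] += 0.05 * dW
    log_pi = np.log(R.mean(axis=0) + 1e-12)
```

After adaptation, `R.argmax(axis=1)` gives hard class labels, and each `W[k]` is a code adapted to its class's structure, which is what allows class-specific compression and denoising in the scheme the abstract describes.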
