Supervised dimensionality reduction using mixture models

Given a classification problem, our goal is to find a low-dimensional linear transformation of the feature vectors that retains the information needed to predict the class labels. We present a method based on maximum conditional likelihood estimation of mixture models. Using mixture models allows us to approximate the distributions to any desired accuracy, while using conditional likelihood as the contrast function ensures that the selected subspace retains the maximum possible mutual information between feature vectors and class labels. Classification experiments with Gaussian mixture components show that this method compares favorably to related dimension reduction techniques. Other distributions belonging to the exponential family can be used to reduce dimensions when the data are of a special type, for example binary or integer-valued data. We provide an EM-like algorithm for model estimation and present visualization experiments using Gaussian and Bernoulli mixture models.
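To make the idea concrete, the sketch below is a minimal illustration, not the paper's EM-like algorithm (whose update equations are not reproduced here): it searches for a projection matrix A that maximizes the conditional log-likelihood of the labels, with each class-conditional density in the projected space modelled by a Gaussian mixture and the class posterior obtained via Bayes' rule. It assumes NumPy, SciPy, and scikit-learn are available; the names fit_projection and neg_cond_loglik are illustrative, and a generic derivative-free optimizer stands in for the authors' EM-style updates.

```python
# Sketch: supervised dimension reduction by maximizing the conditional
# likelihood p(y | A x), with per-class Gaussian mixtures fit in the
# projected space.  Illustrative only; not the paper's exact algorithm.
import numpy as np
from scipy.optimize import minimize
from sklearn.mixture import GaussianMixture

def neg_cond_loglik(a_flat, X, y, n_dims, n_components, classes, priors):
    """Negative conditional log-likelihood  -sum_i log p(y_i | A x_i)."""
    A = a_flat.reshape(n_dims, X.shape[1])
    Z = X @ A.T                                   # project to the low-dim space
    log_joint = np.empty((X.shape[0], len(classes)))
    for j, c in enumerate(classes):
        # Fit one Gaussian mixture per class in the projected space
        # (requires enough samples per class for a stable fit).
        gmm = GaussianMixture(n_components=n_components, random_state=0)
        gmm.fit(Z[y == c])
        log_joint[:, j] = np.log(priors[j]) + gmm.score_samples(Z)
    # Bayes' rule: log p(y_i | z_i) = log p(z_i, y_i) - log p(z_i)
    log_evidence = np.logaddexp.reduce(log_joint, axis=1)
    rows = np.arange(X.shape[0])
    cond = log_joint[rows, np.searchsorted(classes, y)] - log_evidence
    return -cond.sum()

def fit_projection(X, y, n_dims=2, n_components=2, seed=0):
    """Return an (n_dims x D) projection maximizing conditional likelihood."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    classes = np.unique(y)
    priors = np.array([np.mean(y == c) for c in classes])
    A0 = np.random.default_rng(seed).normal(size=(n_dims, X.shape[1]))
    # Derivative-free search over the projection; slow for large D, but it
    # keeps the sketch short and avoids hand-derived gradients.
    res = minimize(neg_cond_loglik, A0.ravel(),
                   args=(X, y, n_dims, n_components, classes, priors),
                   method="Nelder-Mead", options={"maxiter": 2000})
    return res.x.reshape(n_dims, X.shape[1])
```

On small problems, A = fit_projection(X, y, n_dims=2, n_components=2) yields a two-dimensional projection that can be used for visualization or as input to a classifier; replacing the Gaussian mixtures with Bernoulli mixtures would give the analogous construction for binary data mentioned in the abstract.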
