Robust Principal Component Analysis for Computer Vision

Principal Component Analysis (PCA) has been widely used for the representation of shape, appearance, and motion. One drawback of typical PCA methods is that they are least-squares estimation techniques and hence fail to account for “outliers,” which are common in realistic training sets. In computer vision applications, outliers typically occur within a sample (image) due to pixels that are corrupted by noise, alignment errors, or occlusion. We review previous approaches for making PCA robust to outliers and present a new method that uses an intra-sample outlier process to account for pixel outliers. We develop the theory of Robust Principal Component Analysis (RPCA) and describe a robust M-estimation algorithm for learning linear multivariate representations of high-dimensional data such as images. Quantitative comparisons with traditional PCA and previous robust algorithms illustrate the benefits of RPCA when outliers are present. Details of the algorithm are described and a software implementation is being made publicly available.
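The robust M-estimation idea can be illustrated with a minimal sketch. The code below is not the authors' implementation; it is a simplified iteratively reweighted least-squares (IRLS) scheme, assuming a Geman-McClure robust penalty, a fixed scale parameter `sigma`, and a fixed number of alternating updates, with the mean image omitted for brevity. Pixels with large residuals receive near-zero weight, so corrupted pixels stop biasing the learned subspace.

```python
import numpy as np

def robust_pca(X, k, sigma=1.0, n_iter=30):
    """Fit a rank-k linear basis to X (d pixels x n images) robustly.

    IRLS sketch with the Geman-McClure penalty rho(r) = r^2 / (r^2 + sigma^2);
    the per-pixel weight is w(r) = rho'(r) / r = 2*sigma^2 / (r^2 + sigma^2)^2.
    Returns a basis B (d x k) and coefficients C (k x n).
    """
    d, n = X.shape
    # Initialize from the ordinary least-squares solution (plain SVD).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    B = U[:, :k] * s[:k]
    C = Vt[:k].copy()
    eye = 1e-9 * np.eye(k)  # small ridge for numerical stability
    for _ in range(n_iter):
        R = X - B @ C                                  # per-pixel residuals
        W = 2.0 * sigma**2 / (R**2 + sigma**2) ** 2    # IRLS outlier weights
        # Alternating weighted least squares: coefficients, then basis.
        for j in range(n):          # one coefficient vector per image
            Bw = B * W[:, j:j + 1]
            C[:, j] = np.linalg.solve(Bw.T @ B + eye, Bw.T @ X[:, j])
        for i in range(d):          # one basis row per pixel
            Cw = C * W[i]
            B[i] = np.linalg.solve(Cw @ C.T + eye, Cw @ X[i])
    return B, C
```

On data with a few grossly corrupted pixels, this reconstruction tracks the clean low-rank structure far more closely than a plain SVD of the corrupted matrix, which is the quantitative effect the paper measures.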
