Real-time Robust Principal Components' Pursuit

In the recent work of Candès et al., the problem of recovering a low-rank matrix corrupted by i.i.d. sparse outliers is studied and a very elegant solution, principal component pursuit, is proposed. It is motivated as a tool for video surveillance applications, with the background image sequence forming the low-rank part and the moving objects/persons/abnormalities forming the sparse part. Each image frame is treated as a column vector of the data matrix, which is modeled as the sum of a low-rank matrix and a sparse corruption matrix. Principal component pursuit solves the problem under the assumptions that the singular vectors of the low-rank matrix are spread out and the sparsity pattern of the sparse matrix is uniformly random. In practice, however, the sparsity pattern and the signal values of the sparse part (moving persons/objects) usually change in a correlated fashion over time, e.g., an object moves slowly and/or with roughly constant velocity. As a result, the sparse matrix itself will often be low rank. Moreover, for video surveillance applications, a real-time solution would be much more useful. In this work, we study the online version of the above problem and propose a solution that automatically handles correlated sparse outliers; we also discuss how this correlation can potentially be used to our advantage in future work. The key idea of this work is as follows. Given an initial estimate of the principal directions of the low-rank part, we causally estimate the sparse part at each time by solving a noisy compressive sensing type problem. The principal directions of the low-rank part are updated every so often. In between two update times, if new principal components' directions appear, the “noise” seen by the compressive sensing step may increase. This problem is solved, in part, by utilizing the time-correlation model of the low-rank part. We call the proposed solution “Real-time Robust Principal Components' Pursuit”. It still requires the singular vectors of the low-rank part to be spread out, but it does not require i.i.d.-ness of either the sparse part or the low-rank part.
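
To make the above concrete, the Python sketch below implements a basic version of the causal estimation step under simplifying assumptions: frames arrive as columns m_t = l_t + s_t, an initial estimate U0 of the principal directions of the low-rank part is available, the sparse part is recovered by projecting each frame perpendicular to the current directions and solving the resulting noisy compressive sensing problem with a plain iterative soft-thresholding (ISTA) solver, and the directions are refreshed every so often by a batch SVD on recent low-rank estimates. The function names, the regularization weight lam, the update interval, and the batch-SVD refresh are illustrative choices, not the method prescribed in this work; in particular, the sketch omits the support-prediction and time-correlation modeling steps referred to above.

import numpy as np

def ista_l1(Phi, y, lam=0.05, n_iter=200):
    # Solve min_s 0.5*||y - Phi s||_2^2 + lam*||s||_1 by iterative soft thresholding (ISTA).
    step = 1.0 / max(np.linalg.norm(Phi, 2) ** 2, 1e-12)  # 1 / Lipschitz constant of the gradient
    s = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        r = s + step * Phi.T @ (y - Phi @ s)                       # gradient step on the data term
        s = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)   # soft threshold (l1 proximal step)
    return s

def rt_rpcp(frames, U0, update_every=50):
    # frames: n x T data matrix, one image frame per column; U0: n x r initial principal directions.
    U = U0.copy()
    low_rank_buf, sparse_est = [], []
    for t in range(frames.shape[1]):
        m = frames[:, t]
        Phi = np.eye(len(m)) - U @ U.T     # project perpendicular to the current principal directions
        y = Phi @ m                        # "measurements" of the sparse part plus small residual noise
        s_hat = ista_l1(Phi, y)            # noisy compressive-sensing recovery of the sparse outliers
        l_hat = m - s_hat                  # estimate of the low-rank (background) component
        sparse_est.append(s_hat)
        low_rank_buf.append(l_hat)
        if (t + 1) % update_every == 0:    # update the principal directions every so often
            L = np.column_stack(low_rank_buf[-update_every:])
            Unew, _, _ = np.linalg.svd(L, full_matrices=False)
            U = Unew[:, :U0.shape[1]]      # keep the same number of directions (an illustrative choice)
    return np.column_stack(sparse_est)     # estimated sparse part, one column per frame

An initial U0 could be obtained, for instance, from the SVD of a short training sequence containing only the background.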

[1] Wei Lu et al., Modified Basis Pursuit Denoising (modified-BPDN) for noisy compressive sensing with partially known support, 2010 IEEE International Conference on Acoustics, Speech and Signal Processing.

[2] Yi Ma et al., Dense error correction via ℓ1-minimization, 2010.

[3] John Wright et al., Dense Error Correction via ℓ1-Minimization, 2010, IEEE Transactions on Information Theory.

[4] Danijel Skocaj et al., Weighted and robust incremental method for subspace learning, 2003, Proceedings of the Ninth IEEE International Conference on Computer Vision.

[5] Bhaskar D. Rao et al., Algorithms for robust linear regression by exploiting the connection to sparse signal recovery, 2010 IEEE International Conference on Acoustics, Speech and Signal Processing.

[6] B. Ripley et al., Robust Statistics, 2018, Encyclopedia of Mathematical Geosciences.

[7] Yi Ma et al., Robust principal component analysis?, 2009, JACM.

[8] Pablo A. Parrilo et al., Rank-Sparsity Incoherence for Matrix Decomposition, 2009, SIAM J. Optim.

[9] John Wright et al., Dense Error Correction via ℓ1-Minimization, 2008, arXiv:0809.0199.

[10] Charles R. Johnson et al., Matrix Analysis, 1985.

[11] Namrata Vaswani et al., Modified-CS: Modifying compressive sensing for problems with partially known support, 2009, ISIT.

[12] Emmanuel J. Candès et al., Decoding by linear programming, 2005, IEEE Transactions on Information Theory.

[13] Rama Chellappa et al., Robust regression using sparse learning for high dimensional parameter estimation problems, 2010 IEEE International Conference on Acoustics, Speech and Signal Processing.

[14] Peter J. Huber et al., Robust Statistics, 2005, Wiley Series in Probability and Statistics.

[15] Sam T. Roweis et al., EM Algorithms for PCA and SPCA, 1997, NIPS.

[16] Alan L. Yuille et al., Robust principal component analysis by self-organizing rules based on statistical physics approach, 1995, IEEE Trans. Neural Networks.

[17] Wei Lu et al., Modified-CS: Modifying compressive sensing for problems with partially known support, 2009 IEEE International Symposium on Information Theory.

[18] Richard G. Baraniuk et al., Exact signal recovery from sparsely corrupted measurements through the Pursuit of Justice, 2009 Conference Record of the Forty-Third Asilomar Conference on Signals, Systems and Computers.

[19] Michael J. Black et al., A Framework for Robust Subspace Learning, 2003, International Journal of Computer Vision.

[20] Xiaodong Li et al., Dense error correction for low-rank matrices via Principal Component Pursuit, 2010 IEEE International Symposium on Information Theory.

[21] Michael J. Black et al., Robust Principal Component Analysis for Computer Vision, 2001, ICCV.

[22] Jason Morphett et al., An integrated algorithm of incremental and robust PCA, Proceedings of the 2003 International Conference on Image Processing.

[23] A. Willsky et al., Sparse and low-rank matrix decompositions, 2009 47th Annual Allerton Conference on Communication, Control, and Computing (Allerton).

[24] Shie Mannor et al., High dimensional Principal Component Analysis with contaminated data, 2009 IEEE Information Theory Workshop on Networking and Information Theory.

[25] Michael J. Black et al., Robust principal component analysis for computer vision, 2001, Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV 2001).

[26] Terence Tao et al., The Dantzig selector: Statistical estimation when p is much larger than n, 2005, arXiv:math/0506081.

[27] Matthew Brand et al., Incremental Singular Value Decomposition of Uncertain Data with Missing Values, 2002, ECCV.