Alternating direction and Taylor expansion minimization algorithms for unconstrained nuclear norm optimization

In the past decade, robust principal component analysis (RPCA) and low-rank matrix completion (LRMC), two important optimization problems that aim to recover an underlying low-rank matrix from sparsely but heavily corrupted observations or from a subset of its entries, have been successfully applied to image denoising, video processing, web search, bioinformatics, and related areas. This paper proposes an efficient and effective algorithm, the alternating direction and step size minimization (ADSM) algorithm, which employs the alternating direction minimization idea to solve a general relaxed model that also accounts for small noise (e.g., Gaussian noise). The coupling of sparse noise and small noise makes low-rank matrix recovery more challenging than standard RPCA. Within the alternating direction minimization framework, we use Taylor expansion, singular value decomposition, and the shrinkage operator to derive the iterative direction matrices, and a continuation technique is incorporated into ADSM to accelerate convergence. The Taylor expansion and step size minimization (TESM) algorithm for LRMC is designed in the same way, except that the alternating direction minimization idea is no longer needed because no sparse matrix appears in that model. Theoretically, both algorithms are proved to converge globally to their respective optimal points under certain conditions. Numerical results are reported, illustrating that ADSM and TESM are efficient and effective for recovering large-scale low-rank matrices in many cases.
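As a rough illustration of the building blocks named above (singular value thresholding for the nuclear norm, the shrinkage operator for the sparse term, and a continuation schedule on the penalty parameter), the following minimal NumPy sketch alternates exact minimization steps on the relaxed model min_{L,S} mu*||L||_* + mu*lambda*||S||_1 + 0.5*||D - L - S||_F^2. The parameter defaults, the update order, and the stopping rule are illustrative assumptions; this is not the paper's ADSM iteration, which additionally uses Taylor expansion and step size minimization.

```python
import numpy as np

def shrink(X, tau):
    """Elementwise shrinkage (soft-thresholding) operator."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: apply shrinkage to the singular values of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def relaxed_rpca_sketch(D, lam=None, mu0=None, mu_min=1e-6, rho=0.9, n_iter=200):
    """Alternating minimization for
        min_{L,S} mu*||L||_* + mu*lam*||S||_1 + 0.5*||D - L - S||_F^2
    with a simple continuation schedule that shrinks mu each sweep.
    All defaults below are illustrative assumptions, not values from the paper."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu0 if mu0 is not None else 0.25 * np.linalg.norm(D, 2)  # spectral norm
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iter):
        L = svt(D - S, mu)            # exact minimizer in L for fixed S
        S = shrink(D - L, mu * lam)   # exact minimizer in S for fixed L
        mu = max(rho * mu, mu_min)    # continuation: gradually tighten the penalty
    return L, S
```

In practice one would add a stopping criterion (for example, the relative Frobenius-norm change of L and S between sweeps) rather than a fixed iteration count; the continuation step mirrors the accelerating role it plays in ADSM only in spirit.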
