A Smooth Approximation Algorithm of Rank-Regularized Optimization Problem and Its Applications

In this paper, we propose a novel smooth approximation algorithm for the rank-regularized optimization problem. Rank has become a popular regularizer for image processing problems, especially for images with periodic textures. However, low-rank optimization is difficult because the rank function is nonconvex and cannot be expressed in closed form. The most popular approach is to adopt the nuclear norm as an approximation of the rank, but nuclear-norm optimization is itself expensive, since it requires a singular value decomposition at every iteration. Here we propose a direct regularization method for low-rank optimization. In contrast to the nuclear-norm approximation, we introduce a continuous approximation of the rank regularizer. The proposed method is a 'direct' solver for rank regularization and requires only a single singular value decomposition, making it more efficient. We analyze criteria for choosing the parameters and propose an adaptive algorithm based on the Morozov discrepancy principle. Finally, numerical experiments demonstrate the efficiency of the algorithm and its performance on image denoising.
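To make the single-SVD idea concrete, the following is a minimal sketch, not the authors' exact formulation: it assumes one particular smooth rank surrogate, r_eps(X) = sum_i s_i^2/(s_i^2 + eps) over the singular values s_i, which may differ from the surrogate used in the paper. Because both the data-fit term and this surrogate are unitarily invariant, a single SVD of the noisy matrix reduces the problem to independent one-dimensional minimizations over the singular values. The function name, parameter values, and grid-search solver are illustrative only.

```python
# Hypothetical sketch of smooth rank-regularized denoising:
#   min_X  0.5*||X - Y||_F^2 + lam * sum_i s_i(X)^2 / (s_i(X)^2 + eps)
# One SVD of Y is computed, then each singular value is shrunk by a
# separable 1-D minimization (no SVD inside the loop).
import numpy as np


def smooth_rank_denoise(Y, lam=1.0, eps=1e-2, grid=2000):
    """Shrink the singular values of Y under a smooth rank surrogate.

    lam (regularization weight) and eps (smoothing parameter) are
    illustrative; an adaptive rule such as the Morozov discrepancy
    principle could be used to pick lam, as the paper suggests.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)   # single SVD
    s_new = np.empty_like(s)
    for i, si in enumerate(s):
        # 1-D problem: min_{t >= 0} 0.5*(t - si)^2 + lam * t^2/(t^2 + eps).
        # Solved crudely on a grid for clarity; a Newton step or a
        # closed-form root would be used in practice.
        t = np.linspace(0.0, si, grid)
        obj = 0.5 * (t - si) ** 2 + lam * t ** 2 / (t ** 2 + eps)
        s_new[i] = t[np.argmin(obj)]
    return U @ np.diag(s_new) @ Vt


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    L = rng.standard_normal((64, 5)) @ rng.standard_normal((5, 64))  # low rank
    Y = L + 0.1 * rng.standard_normal((64, 64))                      # + noise
    X = smooth_rank_denoise(Y, lam=0.5, eps=1e-2)
    print("relative error:", np.linalg.norm(X - L) / np.linalg.norm(L))
```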
