Some results concerning rank-one truncated steepest descent directions in tensor spaces

The idea of finding low-rank solutions to matrix or tensor optimization tasks by greedy rank-one methods has appeared repeatedly in the literature. The simplest such method, which is often a central building block in accelerated methods, performs updates along low-rank approximations of the negative gradient. This is convenient because the rank increases by at most a prescribed amount per step, and because it admits a surprisingly simple convergence analysis. The key point is that in a finite-dimensional tensor product space, the best rank-one approximation of a tensor has a guaranteed minimal overlap with the tensor itself; rank-one approximations of negative gradients are therefore descent directions. The same principle can be used in Hilbert space, provided the rank growth of the approximating sequence can be balanced against the convergence speed. This work presents a conceptual review of this approach and provides some new insights.
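As an illustration of the idea (not taken from the paper itself), the following minimal NumPy sketch applies rank-one truncated steepest descent to the toy objective f(X) = ½‖X − A‖²_F, whose negative gradient at X is A − X. In the matrix case the best rank-one approximation is given by the top singular triple, and exact line search yields a step size of 1, so each step peels off one singular component. Function names and the test matrix are illustrative choices.

```python
import numpy as np

def best_rank_one(G):
    """Best rank-one approximation of G in Frobenius norm (top singular triple)."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return s[0] * np.outer(U[:, 0], Vt[0])

def rank_one_steepest_descent(A, n_steps):
    """Greedy rank-one updates for f(X) = 0.5*||X - A||_F^2, starting at X = 0.

    Each step moves along the best rank-one approximation of the
    negative gradient A - X, with exact line search for the step size.
    """
    X = np.zeros_like(A)
    residuals = []
    for _ in range(n_steps):
        D = best_rank_one(A - X)                  # rank-one descent direction
        t = np.sum((A - X) * D) / np.sum(D * D)   # exact line search (equals 1 here)
        X = X + t * D
        residuals.append(np.linalg.norm(A - X))   # monotonically decreasing
    return X, residuals
```

For this quadratic objective the iteration reproduces the greedy SVD: after min(m, n) steps the residual vanishes, and each iterate is a best low-rank approximation of A. For general objectives the overlap guarantee mentioned above still ensures descent, but the rank of the iterates grows by at most one per step.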
