Latent Schatten TT Norm for Tensor Completion

Tensor completion has attracted considerable attention in signal processing and machine learning. The tensor train (TT) decomposition has shown better performance than the Tucker decomposition in image and video inpainting. In this paper, we propose a novel tensor completion model based on a newly defined latent Schatten TT norm. We then analyze its statistical performance by establishing a non-asymptotic upper bound on the estimation error, and develop a scalable algorithm to solve the model efficiently. Experimental results on color image inpainting demonstrate that the proposed norm performs favorably compared to other variants of the Schatten norm.
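The latent Schatten TT norm is defined in the paper itself; as background, the TT decomposition it builds on can be computed by the standard TT-SVD procedure of sequential truncated SVDs (Oseledets, 2011). A minimal NumPy sketch, with illustrative function names and a user-chosen rank cap rather than the paper's method:

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a d-way array into TT cores via sequential truncated SVDs.

    Each core k has shape (r_{k-1}, n_k, r_k), with boundary ranks r_0 = r_d = 1.
    `max_rank` caps every TT rank; this is a background sketch, not the
    paper's completion algorithm.
    """
    dims = tensor.shape
    d = len(dims)
    cores = []
    rank = 1
    mat = tensor.reshape(rank * dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r_new = min(max_rank, len(s))
        cores.append(U[:, :r_new].reshape(rank, dims[k], r_new))
        # Carry the remainder to the next unfolding.
        mat = (s[:r_new, None] * Vt[:r_new]).reshape(r_new * dims[k + 1], -1)
        rank = r_new
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract TT cores back into the full tensor."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.reshape([c.shape[1] for c in cores])
```

When the input has exact TT rank at most `max_rank`, the truncated SVDs discard only (numerically) zero singular values, so `tt_reconstruct(tt_svd(X, r))` recovers `X` up to floating-point error.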
