Geometrizing Local Rates of Convergence for High-Dimensional Linear Inverse Problems

This paper presents a unified theoretical framework for analyzing a general ill-posed linear inverse model that includes as special cases noisy compressed sensing, sign vector recovery, trace regression, orthogonal matrix estimation, and noisy matrix completion. We propose a computationally feasible convex program for the linear inverse problem and develop a theory characterizing its local rate of convergence. The unified theory is built on local conic geometry and duality. The difficulty of estimation is captured by a geometric characterization of the local tangent cone through two complexity measures: the Gaussian width and the covering entropy.
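The Gaussian width mentioned above is defined for a set T as w(T) = E sup_{t in T} <g, t>, where g is a standard Gaussian vector; it is the central complexity measure in this line of work. As a minimal illustration (not part of the paper's method), the sketch below estimates w(T) by Monte Carlo for two sets where the supremum has a closed form: the unit Euclidean ball, where w(T) = E||g||_2 ≈ sqrt(d), and the unit l1 ball, where w(T) = E||g||_inf grows like sqrt(2 log d). The function name and interface here are illustrative assumptions.

```python
import numpy as np

def gaussian_width(sample_sup, d, n_trials=2000, seed=0):
    """Monte Carlo estimate of w(T) = E sup_{t in T} <g, t>.

    sample_sup(g) must return sup_{t in T} <g, t> for a given
    Gaussian draw g; the expectation is approximated by averaging.
    """
    rng = np.random.default_rng(seed)
    vals = [sample_sup(rng.standard_normal(d)) for _ in range(n_trials)]
    return float(np.mean(vals))

d = 100

# T = unit Euclidean ball: sup_{||t||_2 <= 1} <g, t> = ||g||_2,
# so w(T) = E||g||_2, which is close to sqrt(d) = 10 for d = 100.
w_ball = gaussian_width(lambda g: np.linalg.norm(g), d)

# T = unit l1 ball: sup_{||t||_1 <= 1} <g, t> = ||g||_inf,
# so w(T) = E max_i |g_i|, which grows like sqrt(2 log d).
w_l1 = gaussian_width(lambda g: np.abs(g).max(), d)
```

The sharply smaller width of the l1 ball relative to the Euclidean ball is the geometric fact underlying why l1-type convex programs succeed with far fewer measurements in sparse recovery.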
