Renegar's Condition Number and Compressed Sensing Performance

Renegar's condition number is a data-driven computational complexity measure for convex programs, generalizing the classical condition number of linear systems. We provide evidence that, for a broad class of compressed sensing problems, the worst-case value of this algorithmic complexity measure, taken over all signals, matches the restricted eigenvalue of the observation matrix, which itself controls recovery performance. This means that, for these problems, a single parameter directly governs both computational complexity and recovery performance.
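For orientation, the two quantities at play can be sketched as follows. This is a minimal illustration using standard textbook forms (Renegar's distance-to-ill-posedness ratio and a restricted eigenvalue in its minimal conic singular value form); the symbols C(A), kappa(A, T) and the cone T are notation introduced here for illustration, not taken from the paper.

% Renegar's condition number of the conic linear system defined by A:
% the norm of the data divided by its distance to the nearest ill-posed instance.
\[
  C(A) \;=\; \frac{\|A\|}{\operatorname{dist}\bigl(A,\ \text{ill-posed instances}\bigr)}.
\]

% A restricted eigenvalue of A over a cone T (e.g., the cone of approximately
% sparse error vectors), which controls recovery guarantees in compressed sensing:
\[
  \kappa(A, T) \;=\; \min_{x \in T,\ x \neq 0} \frac{\|Ax\|_2}{\|x\|_2}.
\]

Under this notation, the abstract's claim is that the worst case of C over the relevant family of instances and the restricted eigenvalue kappa move together, so one parameter tracks both how hard the convex program is to solve and how well it recovers the signal.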
