A Universal Primal-Dual Convex Optimization Framework

We propose a new primal-dual algorithmic framework for a prototypical constrained convex optimization template. The algorithmic instances of our framework are universal, since they automatically adapt to the unknown Hölder continuity degree and constant within the dual formulation. They are also guaranteed to achieve the optimal convergence rates in both the objective residual and the feasibility gap for each Hölder smoothness degree. In contrast to existing primal-dual algorithms, our framework avoids the proximity operator of the objective function. Instead, it leverages computationally cheaper Fenchel-type operators, which are the main workhorses of generalized conditional gradient (GCG)-type methods. In contrast to GCG-type methods, our framework does not require the objective function to be differentiable, can also handle additional general linear inclusion constraints, and still guarantees convergence rates on the primal problem.
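For concreteness, a prototypical template of this kind, and the operators it involves, can be sketched as follows; the notation (f, X, A, b, K, nu, M_nu) is chosen here for illustration and is not quoted verbatim from the paper. The primal problem and its dual function read

    \min_{x \in \mathcal{X}} f(x) \quad \text{subject to} \quad Ax - b \in \mathcal{K},

    d(\lambda) = \min_{x \in \mathcal{X}} \bigl\{ f(x) + \langle \lambda, Ax - b \rangle \bigr\}.

Evaluating d at a dual point \lambda then requires only a Fenchel-type (sharp) operator applied at z = A^{\top}\lambda,

    [z]^{\sharp} \in \arg\min_{x \in \mathcal{X}} \bigl\{ \langle z, x \rangle + f(x) \bigr\},

which is typically much cheaper than the proximity operator \mathrm{prox}_{f}(z) = \arg\min_{x} \{ f(x) + \tfrac{1}{2}\|x - z\|^{2} \} that standard primal-dual splitting methods rely on; when f is linear or zero, the sharp operator reduces to the linear minimization oracle of Frank-Wolfe-type methods. Universality, in this context, refers to adapting to the unknown Hölder exponent \nu \in [0,1] and constant M_{\nu} of the dual gradient,

    \|\nabla d(\lambda_{1}) - \nabla d(\lambda_{2})\| \le M_{\nu} \|\lambda_{1} - \lambda_{2}\|^{\nu},

without requiring either \nu or M_{\nu} as an input.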
