The alternating descent conditional gradient method for sparse inverse problems

We propose a variant of the classical conditional gradient method (CGM) for sparse inverse problems with differentiable measurement models. Such models arise in many practical problems, including superresolution, time-series modeling, and matrix completion. Our algorithm combines convex and nonconvex optimization techniques: it alternates global conditional gradient steps with nonconvex local search steps that exploit the differentiable measurement model. This hybridization gives the theoretical global optimality guarantees and stopping conditions of convex optimization along with the performance and modeling flexibility associated with nonconvex optimization. Our experiments demonstrate that the technique achieves state-of-the-art results in several applications.
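The alternating structure described above can be sketched concretely. The following is a minimal, illustrative Python implementation for a toy 1D superresolution problem: each source at location theta contributes a Gaussian bump to the measurements, the conditional gradient step adds the source most correlated with the residual (a coarse grid search stands in for the linear-minimization oracle), and the local search step runs gradient descent on the source locations while refitting weights by least squares. All names here (`psi`, `adcg`, the Gaussian model, step sizes) are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical setup: m measurements of a 1D signal; a unit source at
# location theta contributes a Gaussian bump psi(theta) on a fixed grid.
m, sigma = 64, 0.05
grid = np.linspace(0.0, 1.0, m)

def psi(theta):
    # Measurement of a unit source at location theta (differentiable in theta).
    return np.exp(-(grid - theta) ** 2 / (2 * sigma ** 2))

def dpsi(theta):
    # Derivative of psi with respect to theta.
    return psi(theta) * (grid - theta) / sigma ** 2

def forward(thetas, weights):
    # Weighted superposition of sources (the measurement model).
    return sum(w * psi(t) for t, w in zip(thetas, weights))

def adcg(y, n_iters=10, local_steps=50, lr=1e-3):
    thetas, weights = [], []
    for _ in range(n_iters):
        resid = forward(thetas, weights) - y
        # Global conditional gradient step: add the candidate source most
        # (negatively) correlated with the residual.
        cand = np.linspace(0.0, 1.0, 512)
        thetas.append(cand[np.argmin([psi(t) @ resid for t in cand])])
        weights.append(0.0)
        # Nonconvex local search: gradient descent on the source locations,
        # exploiting differentiability of the measurement model, with the
        # weights refit by least squares at each step.
        for _ in range(local_steps):
            A = np.stack([psi(t) for t in thetas], axis=1)
            weights = list(np.linalg.lstsq(A, y, rcond=None)[0])
            resid = A @ weights - y
            grads = [w * (dpsi(t) @ resid) for t, w in zip(thetas, weights)]
            thetas = [t - lr * g for t, g in zip(thetas, grads)]
    return np.array(thetas), np.array(weights)

# Recover two sources from a noiseless superposition.
y = forward([0.3, 0.7], [1.0, 0.5])
est_thetas, est_weights = adcg(y, n_iters=2)
```

In this sketch the convex machinery (the per-iteration atom selection) provides the global search, while the local gradient descent moves source locations off the candidate grid, illustrating the hybridization the abstract describes.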
