Composite optimization for robust blind deconvolution

The blind deconvolution problem seeks to recover a pair of vectors from a set of rank-one bilinear measurements. We consider a natural nonsmooth formulation of the problem and show that, under standard statistical assumptions, its moduli of weak convexity, sharpness, and Lipschitz continuity are all dimension-independent. This phenomenon persists even when up to half of the measurements are corrupted by noise. Consequently, standard algorithms, such as the subgradient and prox-linear methods, converge at a rapid dimension-independent rate when initialized within constant relative error of the solution. We complete the paper with a new initialization strategy that complements the local search algorithms. The initialization procedure is both provably efficient and robust to outlying measurements. Numerical experiments on both simulated and real data illustrate the developed theory and methods.
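
To make the setup concrete, the nonsmooth formulation described above can be written as minimizing the average absolute bilinear residual, f(x, y) = (1/m) Σ_i |⟨ℓ_i, x⟩⟨r_i, y⟩ − b_i|, over the measurement triples (ℓ_i, r_i, b_i). The Python sketch below runs the Polyak subgradient method on this objective; it is a minimal illustration under our own naming and defaults, not the paper's reference implementation, and it assumes the optimal value f_min = 0 (exact measurements).

    import numpy as np

    def polyak_subgradient(L, R, b, x0, y0, iters=500, f_min=0.0):
        """Polyak subgradient method for the nonsmooth objective
        f(x, y) = (1/m) * sum_i |<l_i, x> * <r_i, y> - b_i|,
        where l_i and r_i are the rows of L and R.  f_min is the
        optimal value, equal to 0 when the measurements are exact."""
        x, y = x0.astype(float).copy(), y0.astype(float).copy()
        m = b.size
        for _ in range(iters):
            Lx, Ry = L @ x, R @ y          # inner products <l_i, x>, <r_i, y>
            res = Lx * Ry - b              # bilinear residuals
            fval = np.abs(res).mean()
            if fval <= f_min:              # already optimal
                break
            s = np.sign(res) / m
            gx = L.T @ (s * Ry)            # subgradient block in x
            gy = R.T @ (s * Lx)            # subgradient block in y
            step = (fval - f_min) / (gx @ gx + gy @ gy)  # Polyak step size
            x -= step * gx
            y -= step * gy
        return x, y

Consistent with the local convergence theory summarized in the abstract, such a method needs starting points x0, y0 within constant relative error of the signal pair, for instance from a spectral initialization as sketched next.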

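For initialization, a standard spectral approach forms the weighted cross-moment matrix M = (1/m) Σ_i b_i ℓ_i r_iᵀ, whose expectation is proportional to the rank-one signal matrix under Gaussian measurement models, and reads off its top singular-vector pair. The sketch below implements this plain, non-robust variant for illustration only; the paper's initialization procedure is specifically built to tolerate outlying measurements and differs in its details.

    import numpy as np

    def spectral_init(L, R, b):
        """Plain spectral initialization: the top singular-vector pair of
        M = (1/m) * sum_i b_i * l_i r_i^T estimates the signal directions,
        since E[M] is proportional to the rank-one matrix x y^T when the
        rows of L and R are independent standard Gaussians."""
        m = b.size
        M = (L * b[:, None]).T @ R / m     # weighted cross-moment, shape (d1, d2)
        U, S, Vt = np.linalg.svd(M, full_matrices=False)
        scale = np.sqrt(S[0])              # split the estimated magnitude evenly
        return scale * U[:, 0], scale * Vt[0]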