Phase diagram of matrix compressed sensing

In the problem of matrix compressed sensing, we aim to recover a low-rank matrix from a few noisy linear measurements. In this contribution, we analyze the asymptotic performance of a Bayes-optimal inference procedure for a model where the matrix to be recovered is a product of random matrices. The results that we obtain using the replica method describe the state evolution of the Parametric Bilinear Generalized Approximate Message Passing (P-BiG-AMP) algorithm, recently introduced by J. T. Parker and P. Schniter [IEEE J. Select. Top. Signal Process. 10, 795 (2016); doi:10.1109/JSTSP.2016.2539123]. We show the existence of two different types of phase transition and their implications for the solvability of the problem, and we compare the results of our theoretical analysis to the numerical performance achieved by P-BiG-AMP. Remarkably, the asymptotic replica equations for matrix compressed sensing are the same as those for the related but formally different problem of matrix factorization.
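
To make the setup above concrete, here is a minimal NumPy sketch of the measurement model: a rank-r matrix X = UV is built from random factors and observed through M noisy Gaussian linear measurements y_mu = <A_mu, X> + noise. The dimensions, noise level, and the choice of i.i.d. Gaussian sensing operators A_mu are illustrative assumptions for this sketch, not values prescribed by the analysis.

```python
import numpy as np

# Illustrative sizes (assumed for this sketch, not fixed by the paper)
rng = np.random.default_rng(0)
n, p, r = 50, 40, 3          # X is n x p with rank r
M = 1200                     # number of linear measurements
noise_std = 0.01             # measurement noise standard deviation

# Low-rank signal: product of two random factor matrices, X = U V
U = rng.standard_normal((n, r))
V = rng.standard_normal((r, p))
X = U @ V

# Noisy Gaussian linear measurements: y_mu = <A_mu, X> + xi_mu
A = rng.standard_normal((M, n, p)) / np.sqrt(n * p)   # i.i.d. sensing operators
y = np.einsum('mij,ij->m', A, X) + noise_std * rng.standard_normal(M)

# Measurement rate per unknown matrix entry; the phase transitions discussed
# in the paper are located in terms of rates like this one.
alpha = M / (n * p)
print(f"measurement rate alpha = {alpha:.2f}")
```

In the regime analyzed in the paper, all dimensions grow large at fixed ratios while the rank stays small, and the replica/state-evolution analysis predicts for which measurement rates an algorithm such as P-BiG-AMP can recover X (and the factors U, V up to the usual ambiguities).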

[1] Toshiyuki Tanaka et al., Low-rank matrix reconstruction and clustering via approximate message passing, 2013, NIPS.

[2] Emmanuel J. Candès et al., Exact Matrix Completion via Convex Optimization, 2008, Found. Comput. Math.

[3] Florent Krzakala et al., Statistical physics-based reconstruction in compressed sensing, 2011, arXiv.

[4] Pablo A. Parrilo et al., Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization, 2007, SIAM Rev.

[5] Yoshiyuki Kabashima et al., An integral formula for large random rectangular matrices and its application to analysis of linear vector channels, 2008, 6th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks and Workshops.

[6] Philip Schniter et al., Parametric Bilinear Generalized Approximate Message Passing, 2015, IEEE Journal of Selected Topics in Signal Processing.

[7] Florent Krzakala et al., Phase diagram and approximate message passing for blind calibration and dictionary learning, 2013, IEEE International Symposium on Information Theory (ISIT).

[8] Sundeep Rangan et al., Generalized approximate message passing for estimation with random linear mixing, 2010, 2011 IEEE International Symposium on Information Theory (ISIT).

[9] Hidetoshi Nishimori, Statistical Physics of Spin Glasses and Information Processing: An Introduction, 2001, Oxford University Press.

[10] Sundeep Rangan et al., Adaptive damping and mean removal for the generalized approximate message passing algorithm, 2014, 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).

[11] Volkan Cevher et al., Bilinear Generalized Approximate Message Passing—Part I: Derivation, 2013, IEEE Transactions on Signal Processing.

[12] A. P. Dempster, N. M. Laird, and D. B. Rubin, Maximum likelihood from incomplete data via the EM algorithm, 1977, J. R. Stat. Soc. Ser. B.

[13] Florent Krzakala et al., Phase Transitions and Sample Complexity in Bayes-Optimal Matrix Factorization, 2014, IEEE Transactions on Information Theory.

[14] Y. Kabashima, A CDMA multiuser detection algorithm on the basis of belief propagation, 2003.

[15] Volkan Cevher et al., Bilinear Generalized Approximate Message Passing—Part II: Applications, 2014, IEEE Transactions on Signal Processing.

[16] S. Frick et al., Compressed Sensing, 2014, Computer Vision, A Reference Guide.

[17] Florent Krzakala et al., Phase transitions in sparse PCA, 2015, IEEE International Symposium on Information Theory (ISIT).

[20] John D. Lafferty et al., A Convergent Gradient Descent Algorithm for Rank Minimization and Semidefinite Programming from Random Linear Measurements, 2015, NIPS.

[21] Ayaka Sakata et al., Sample complexity of Bayesian optimal dictionary learning, 2013, IEEE International Symposium on Information Theory (ISIT).

[22] Andrea Montanari et al., Message-passing algorithms for compressed sensing, 2009, Proceedings of the National Academy of Sciences.

[24] Yoram Bresler et al., Near Optimal Compressed Sensing of Sparse Rank-One Matrices via Sparse Power Factorization, 2013, arXiv.

[25] Florent Krzakala et al., On convergence of approximate message passing, 2014, IEEE International Symposium on Information Theory (ISIT).

[26] R. Palmer et al., Solution of 'Solvable model of a spin glass', 1977.

[27] M. Mézard et al., Spin Glass Theory and Beyond, 1987.

[29] Florent Krzakala et al., Statistical physics of inference: thresholds and algorithms, 2015, arXiv.

[30] M. Mézard and A. Montanari, Information, Physics, and Computation, 2009, Oxford University Press.

[31] Prateek Jain et al., Low-rank matrix completion using alternating minimization, 2012, STOC '13.

[32] Andrea Montanari et al., The phase transition of matrix recovery from Gaussian measurements matches the minimax MSE of matrix denoising, 2013, Proceedings of the National Academy of Sciences.

[33] Erwin Riegler et al., Information-theoretic limits of matrix completion, 2015, IEEE International Symposium on Information Theory (ISIT).