A comment on "A unified Bayesian inference framework for generalized linear models"

The recent work `A unified Bayesian inference framework for generalized linear models' \cite{meng1} shows that inference for the generalized linear model (GLM) can be carried out by iterating between a standard linear module (SLM), which runs a standard Bayesian algorithm, and a minimum mean squared error (MMSE) module. The proposed framework is built on expectation propagation and corresponds to the sum-product version of generalized approximate message passing (GAMP) \cite{Rangan1}. However, \cite{Rangan1} also proposes a max-sum GAMP. What is the intrinsic relationship between the two? This comment aims to answer that question.
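To make the modular structure described above concrete, the following Python sketch illustrates one way such an SLM/MMSE iteration can be organized for a GLM with measurements y = q(Ax + w). It is only an illustrative sketch under stated assumptions, not the implementation of \cite{meng1}: the likelihood-specific denoiser mmse_denoise_z and the linear-model solver slm_solve are hypothetical placeholders to be supplied by the user.

\begin{verbatim}
# Sketch of a modular GLM solver: alternate between (i) an MMSE module
# that processes the nonlinear likelihood p(y | z), z = A x, and (ii) an
# SLM module that runs any standard Bayesian solver on the resulting
# pseudo linear model.  The two callables below are hypothetical.
import numpy as np

def glm_inference(A, y, mmse_denoise_z, slm_solve, n_iter=20):
    m, n = A.shape
    # Extrinsic Gaussian message about z = A x, initialized uninformative.
    z_ext, v_ext = np.zeros(m), np.full(m, 1e6)
    x_hat = np.zeros(n)
    for _ in range(n_iter):
        # MMSE module: posterior mean/variance of z under p(y | z) and the
        # incoming Gaussian prior N(z; z_ext, v_ext).
        z_post, v_post = mmse_denoise_z(y, z_ext, v_ext)
        # Turn the posterior into an extrinsic (pseudo) observation of z
        # in expectation-propagation style: divide out the incoming message.
        v_pseudo = 1.0 / np.maximum(1.0 / v_post - 1.0 / v_ext, 1e-12)
        z_pseudo = v_pseudo * (z_post / v_post - z_ext / v_ext)
        # SLM module: any standard Bayesian solver for the linear model
        # z_pseudo = A x + noise with variance v_pseudo.  It returns the
        # estimate of x and the extrinsic message about z for the next pass.
        x_hat, z_ext, v_ext = slm_solve(A, z_pseudo, v_pseudo)
    return x_hat
\end{verbatim}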

[1] J. Lu et al., "An expectation propagation perspective on approximate message passing," IEEE Signal Processing Letters, 2015.

[2] D. J. C. MacKay, Information Theory, Inference, and Learning Algorithms. Cambridge, U.K.: Cambridge Univ. Press, 2003.

[3] S. Rangan, "Generalized approximate message passing for estimation with random linear mixing," in Proc. IEEE Int. Symp. Information Theory (ISIT), 2011.

[4] T. Minka, "A family of algorithms for approximate Bayesian inference," Ph.D. dissertation, Massachusetts Institute of Technology, 2001.

[5] S. Wu et al., "A unified Bayesian inference framework for generalized linear models," IEEE Signal Processing Letters, 2017.