Learning Scale Free Network by Node Specific Degree Prior

Learning the network structure underlying data is an important problem in machine learning. This paper introduces a novel prior for inferring scale-free networks, which are widely used to model social and biological networks. The prior not only favors a desirable global node degree distribution, but also takes into account the relative strength of all possible edges adjacent to the same node and the estimated degree of each individual node. To accomplish this, ranking is incorporated into the prior, which makes the problem challenging to solve. We employ an ADMM (alternating direction method of multipliers) framework to solve the Gaussian graphical model regularized by this prior. Our experiments on both synthetic and real data show that our prior not only yields a scale-free network, but also produces many more correctly predicted edges than alternatives such as the scale-free-inducing prior, the hub-inducing prior, and the l1 norm.
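The paper's node-specific degree prior and its ranking-based penalty are not reproduced here, but the ADMM framework it builds on is the standard one for penalized Gaussian graphical models. As an illustrative baseline only, the sketch below solves the plain l1-regularized case (graphical lasso) with ADMM; the function name and parameters are our own, and the paper's method would replace the soft-thresholding step with the proximal operator of its degree prior.

```python
import numpy as np

def graphical_lasso_admm(S, lam, rho=1.0, n_iter=100):
    """Illustrative ADMM for the l1-penalized Gaussian graphical model.

    minimize_Theta  -logdet(Theta) + tr(S @ Theta) + lam * ||Theta||_1

    This is the generic graphical-lasso baseline, not the paper's
    node-specific degree prior; that prior would change only the Z-update.
    """
    p = S.shape[0]
    Z = np.eye(p)
    U = np.zeros((p, p))
    for _ in range(n_iter):
        # Theta-update: solve rho*Theta - Theta^{-1} = rho*(Z - U) - S
        # in closed form via eigendecomposition of the right-hand side.
        w, Q = np.linalg.eigh(rho * (Z - U) - S)
        theta_eig = (w + np.sqrt(w ** 2 + 4.0 * rho)) / (2.0 * rho)
        Theta = Q @ np.diag(theta_eig) @ Q.T
        # Z-update: elementwise soft-thresholding, the prox of the l1 norm.
        # A structured (e.g. degree-based) prior replaces this step.
        A = Theta + U
        Z = np.sign(A) * np.maximum(np.abs(A) - lam / rho, 0.0)
        # Dual variable update.
        U = U + Theta - Z
    return Z
```

The split into a log-det term (handled exactly by eigendecomposition) and a penalty term (handled by its proximal operator) is what makes ADMM attractive here: a more complex prior, such as the ranking-based degree prior, only needs its own prox in the Z-update.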
