Node Dependent Local Smoothing for Scalable Graph Learning

Recent works reveal that feature or label smoothing lies at the core of Graph Neural Networks (GNNs). Concretely, they show that feature smoothing combined with simple linear regression achieves performance comparable to carefully designed GNNs, and that a simple MLP whose predictions are post-processed with label smoothing can outperform the vanilla GCN. Despite these interesting findings, smoothing itself is not yet well understood, especially with regard to controlling the extent of smoothness. Intuitively, too few smoothing iterations cause under-smoothing and too many cause over-smoothing, and both lead to sub-optimal performance. Moreover, the appropriate extent of smoothness is node-specific, depending on each node's degree and local structure. To this end, we propose a novel algorithm called node-dependent local smoothing (NDLS), which controls the smoothness of every node by setting a node-specific smoothing iteration. Specifically, NDLS computes influence scores based on the adjacency matrix and selects each node's iteration number by setting a threshold on these scores. Once selected, the iteration number can be applied to both feature smoothing and label smoothing. Experimental results demonstrate that NDLS enjoys high accuracy (state-of-the-art performance on node classification tasks), flexibility (it can be incorporated with any model), and scalability and efficiency (it supports large-scale graphs with fast training).
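The following is a minimal sketch of the procedure the abstract describes, not the authors' released implementation. It assumes the common symmetric normalization A_hat = D~^{-1/2} A~ D~^{-1/2} with self-loops, under which A_hat^k converges on a connected, non-bipartite graph to a rank-one stationary matrix with entries sqrt(d_i * d_j) / sum(d). A node's iteration count is then taken as the smallest k at which its row of A_hat^k falls within a threshold of that limit; the function and parameter names (`ndls_smooth`, `eps`, `max_k`) and the L2 distance are illustrative choices, as the abstract does not fix them.

```python
import numpy as np
import scipy.sparse as sp

def ndls_smooth(adj, features, eps=0.05, max_k=16):
    """Sketch of node-dependent local smoothing (densifies A_hat: small graphs only).

    For each node v, pick the smallest k such that row v of A_hat^k is within
    eps (L2 distance) of its stationary limit, and return the k(v)-step
    smoothed features together with the chosen per-node iteration counts.
    `adj` is a scipy sparse adjacency matrix; `eps` and `max_k` are
    illustrative hyper-parameters, not values from the paper.
    """
    n = adj.shape[0]
    a_tilde = adj + sp.eye(n)                              # add self-loops
    deg = np.asarray(a_tilde.sum(axis=1)).ravel()          # degrees incl. self-loops
    d_inv_sqrt = sp.diags(deg ** -0.5)
    a_hat = (d_inv_sqrt @ a_tilde @ d_inv_sqrt).toarray()  # symmetric normalization
    # Stationary limit of A_hat^k on a connected, non-bipartite graph:
    # rank-one matrix with entries sqrt(d_i * d_j) / sum(d).
    sqrt_deg = np.sqrt(deg)
    stationary = np.outer(sqrt_deg, sqrt_deg) / deg.sum()

    powers = np.eye(n)                  # A_hat^0
    smoothed = features.astype(float)   # 0-step smoothing = raw features
    k_v = np.full(n, max_k)             # fallback for nodes that never converge
    unset = np.ones(n, dtype=bool)      # nodes whose k(v) is still undecided
    for k in range(1, max_k + 1):
        powers = a_hat @ powers                        # A_hat^k
        smoothed[unset] = (powers @ features)[unset]   # k-step features for open nodes
        converged = np.linalg.norm(powers - stationary, axis=1) < eps
        k_v[unset & converged] = k      # record k(v) for newly converged nodes...
        unset &= ~converged             # ...and freeze them at their k-step features
    return smoothed, k_v
```

Given the per-node depths `k_v`, label smoothing can reuse the same mechanism: propagate a base classifier's soft predictions instead of the raw features, stopping each node's propagation at its own k(v).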
