Distributed Graph Neural Network Training: A Survey

Graph neural networks (GNNs) are a class of deep learning models trained on graph-structured data, and they have been applied successfully in a variety of domains. Despite their effectiveness, GNNs still struggle to scale efficiently to large graphs. As a remedy, distributed computing is a promising approach to training large-scale GNNs, since it provides abundant computing resources. However, the dependencies induced by the graph structure make high-efficiency distributed GNN training difficult to achieve, as training suffers from massive communication and workload imbalance. In recent years, many efforts have been devoted to distributed GNN training, and an array of training algorithms and systems have been proposed. Yet there is a lack of a systematic review of the optimization techniques for the distributed execution of GNN training. In this survey, we analyze three major challenges in distributed GNN training: massive feature communication, loss of model accuracy, and workload imbalance. We then introduce a new taxonomy of the optimization techniques that address these challenges, classifying existing techniques into four categories: GNN data partition, GNN batch generation, GNN execution model, and GNN communication protocol. We discuss the techniques in each category in detail. Finally, we summarize existing distributed GNN systems for multi-GPU, GPU-cluster, and CPU-cluster settings, respectively, and discuss future directions for distributed GNN training.
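The link between graph structure and communication cost can be made concrete with a small, self-contained sketch (not drawn from the survey itself; the function and graph are illustrative assumptions): under an edge-cut data partition, every edge whose endpoints land on different workers forces a feature transfer during neighbor aggregation, so partitioners aim to minimize the edge cut while keeping partition sizes balanced.

```python
from collections import defaultdict

def edge_cut_stats(edges, assign):
    """Given an undirected edge list and a vertex -> partition map,
    return (cross-partition edge count, per-partition vertex counts).
    The cut size is a proxy for feature-communication volume."""
    sizes = defaultdict(int)
    for v in {u for e in edges for u in e}:
        sizes[assign(v)] += 1
    cut = sum(1 for u, v in edges if assign(u) != assign(v))
    return cut, dict(sizes)

# Toy graph: two triangles joined by a single bridge edge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]

# A naive hash partition scatters each triangle across both workers...
naive_cut, _ = edge_cut_stats(edges, lambda v: v % 2)

# ...while a locality-aware partition keeps each triangle on one worker,
# leaving only the bridge edge as cross-partition communication.
good_cut, good_sizes = edge_cut_stats(edges, lambda v: 0 if v < 3 else 1)

print(naive_cut, good_cut, good_sizes)  # 5 1 {0: 3, 1: 3}
```

Both partitions here are perfectly balanced (three vertices per worker), yet the locality-aware one cuts one edge instead of five; real partitioners such as METIS trade off exactly these two objectives at billion-edge scale.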
