G3: When Graph Neural Networks Meet Parallel Graph Processing Systems on GPUs

This paper demonstrates G3, a programming framework for Graph Neural Network (GNN) training, tailored from graph processing systems on Graphics Processing Units (GPUs). G3 aims to improve the efficiency of GNN training by supporting graph-structured operations with parallel graph processing systems. G3 enables users to leverage the massive parallelism and other architectural features of GPUs in two ways: building GNN layers by writing sequential C/C++ code with a set of flexible APIs (Application Programming Interfaces), and creating GNN models from the essential GNN operations and layers provided in G3. The runtime system of G3 automatically executes the user-defined GNNs on the GPU, with a series of graph-centric optimizations enabled. We demonstrate the steps of developing several common GNN structures with G3, and the superior performance of G3 over existing GNN training systems such as PyTorch and TensorFlow.

PVLDB Reference Format: Husong Liu, Shengliang Lu, Xinyu Chen, and Bingsheng He. G3: When Graph Neural Networks Meet Parallel Graph Processing Systems on GPUs. PVLDB, 12(xxx): xxxx-yyyy, 2019. DOI: https://doi.org/10.14778/xxxxxxx.xxxxxxx
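To make the "write sequential C/C++ code, let the runtime parallelize it" idea concrete, the listing below is a minimal, self-contained C++ sketch of the aggregate-then-transform pattern behind a GCN-style layer over a CSR graph. It is an illustration of the programming style only: every identifier (CSRGraph, gcn_layer, the weight layout, and so on) is a hypothetical placeholder and not the actual G3 API, and the loop-based CPU code stands in for what a graph processing runtime would execute in parallel on the GPU.

// Illustrative sketch only: a CPU reference of the vertex-centric
// aggregate-then-transform pattern used by GCN-style layers.
// All names below (CSRGraph, gcn_layer, ...) are hypothetical placeholders,
// not the actual G3 API.
#include <cstdio>
#include <vector>

struct CSRGraph {                       // compressed sparse row adjacency
    std::vector<int> row_ptr;           // size = num_vertices + 1
    std::vector<int> col_idx;           // neighbor ids
    int num_vertices() const { return (int)row_ptr.size() - 1; }
};

// One GCN-style layer: mean-aggregate neighbor features, then apply a
// per-vertex linear transform W (in_dim x out_dim, row-major).
std::vector<float> gcn_layer(const CSRGraph& g,
                             const std::vector<float>& feat, int in_dim,
                             const std::vector<float>& W, int out_dim) {
    int n = g.num_vertices();
    std::vector<float> agg(n * in_dim, 0.0f), out(n * out_dim, 0.0f);
    for (int v = 0; v < n; ++v) {                       // gather phase
        int begin = g.row_ptr[v], end = g.row_ptr[v + 1];
        for (int e = begin; e < end; ++e) {
            int u = g.col_idx[e];
            for (int d = 0; d < in_dim; ++d)
                agg[v * in_dim + d] += feat[u * in_dim + d];
        }
        int deg = end - begin;
        if (deg > 0)
            for (int d = 0; d < in_dim; ++d) agg[v * in_dim + d] /= deg;
    }
    for (int v = 0; v < n; ++v)                         // apply phase
        for (int o = 0; o < out_dim; ++o)
            for (int d = 0; d < in_dim; ++d)
                out[v * out_dim + o] += agg[v * in_dim + d] * W[d * out_dim + o];
    return out;
}

int main() {
    // Toy graph with 3 vertices and edges 0->1, 0->2, 1->0, 2->0.
    CSRGraph g{{0, 2, 3, 4}, {1, 2, 0, 0}};
    std::vector<float> feat = {1, 0, 0, 1, 1, 1};       // 3 vertices x 2 dims
    std::vector<float> W = {0.5f, 0.5f};                // 2 x 1 weight matrix
    std::vector<float> out = gcn_layer(g, feat, 2, W, 1);
    for (float x : out) std::printf("%.2f\n", x);       // 0.75, 0.50, 0.50
    return 0;
}

In a GPU graph processing system, the gather phase above would map to an edge-parallel primitive and the apply phase to a vertex-parallel one; the point of the sketch is only the separation of graph-structured aggregation from the dense per-vertex transform.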
