The Graph Convolutional Network (GCN) model and its variants are powerful graph embedding tools for facilitating classification and clustering on graphs. However, a major challenge is to reduce the complexity of layered GCNs and make them parallelizable and scalable on very large graphs; state-of-the-art techniques are unable to achieve scalability without losing accuracy and efficiency. In this paper, we propose novel parallelization techniques for graph sampling-based GCNs that achieve superior scalable performance on very large graphs without compromising accuracy. Specifically, our GCN guarantees work-efficient training and produces order-of-magnitude savings in computation and communication. To scale GCN training on tightly coupled shared-memory systems, we develop parallelization strategies for the key steps in training. For the graph sampling step, we exploit parallelism within and across multiple sampling instances, and devise an efficient data structure for concurrent accesses that provides a theoretical guarantee of near-linear speedup with the number of processing units. For the feature propagation step within the sampled graph, we improve cache utilization and reduce DRAM communication via data partitioning. We prove that our partitioning strategy is a 2-approximation for minimizing communication time relative to the optimal strategy. We demonstrate that our parallel graph embedding outperforms state-of-the-art methods in scalability (with respect to the number of processors, graph size, and GCN model size), efficiency, and accuracy on several large datasets. On a 40-core Xeon platform, our parallel training achieves a 64x speedup (with AVX) in the sampling step and a 25x speedup in the feature propagation step compared to the serial implementation, resulting in a net speedup of 21x. Our scalable algorithm enables deeper GCNs, as demonstrated by a 1306x speedup of a 3-layer GCN compared to the Tensorflow implementation of the state-of-the-art.
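To illustrate the inter-instance parallelism described for the graph sampling step, the following is a minimal C++/OpenMP sketch, not the authors' implementation: each thread draws independent subgraphs with its own RNG, so no synchronization is needed across sampling instances. The graph layout (`CSRGraph`), the frontier-style sampler (`frontier_sample`), and all parameter names are illustrative assumptions; the paper's concurrent data structure and sampling algorithm may differ.

```cpp
// Hedged sketch: parallelism across independent sampling instances on a
// shared-memory machine. Compile with: g++ -O2 -fopenmp sample.cpp
#include <omp.h>
#include <cstdio>
#include <random>
#include <vector>

struct CSRGraph {                     // simple CSR adjacency (an assumption)
    std::vector<int> indptr;          // size |V|+1
    std::vector<int> indices;         // size |E|
    int num_nodes() const { return static_cast<int>(indptr.size()) - 1; }
};

// One sampling instance: random-walk-style node sampling up to a budget.
// This sampler is a placeholder for whichever graph sampler is used.
std::vector<int> frontier_sample(const CSRGraph& g, int budget, std::mt19937& rng) {
    std::uniform_int_distribution<int> pick(0, g.num_nodes() - 1);
    std::vector<int> sampled;
    sampled.reserve(budget);
    int cur = pick(rng);
    for (int i = 0; i < budget; ++i) {
        sampled.push_back(cur);
        int deg = g.indptr[cur + 1] - g.indptr[cur];
        if (deg == 0) { cur = pick(rng); continue; }     // restart on dead end
        std::uniform_int_distribution<int> nbr(0, deg - 1);
        cur = g.indices[g.indptr[cur] + nbr(rng)];       // step to a random neighbor
    }
    return sampled;
}

int main() {
    // Toy 4-node cycle graph in CSR form.
    CSRGraph g{{0, 2, 4, 6, 8}, {1, 3, 0, 2, 1, 3, 2, 0}};
    const int num_subgraphs = 8, budget = 3;
    std::vector<std::vector<int>> subgraphs(num_subgraphs);

    // Inter-instance parallelism: subgraphs are drawn concurrently,
    // each with a private per-instance seed.
    #pragma omp parallel for schedule(dynamic)
    for (int s = 0; s < num_subgraphs; ++s) {
        std::mt19937 rng(1234 + s);
        subgraphs[s] = frontier_sample(g, budget, rng);
    }

    for (int s = 0; s < num_subgraphs; ++s) {
        std::printf("subgraph %d:", s);
        for (int v : subgraphs[s]) std::printf(" %d", v);
        std::printf("\n");
    }
    return 0;
}
```

In a full training pipeline, each sampled subgraph would then feed a minibatch of feature propagation; the paper additionally exploits parallelism within a single sampling instance, which this sketch omits.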