Towards Graph Pooling by Edge Contraction

Research on Graph Neural Networks (GNNs) has concentrated on improving convolutional layers, with little attention paid to developing graph pooling layers. Yet pooling layers can enable GNNs to reason over abstracted groups of nodes instead of single nodes, thus increasing their generalization potential. To close this gap, we propose a graph pooling layer relying on the notion of edge contraction: EdgePool learns a localized and sparse pooling transform. We evaluate it on four datasets, finding that it increases performance on the three largest. We also show that EdgePool can be integrated into existing GNN architectures without additional losses or regularization.
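To make the edge-contraction idea concrete, the sketch below shows one plausible reading of such a pooling step in NumPy: each edge receives a learned score, the highest-scoring edges whose endpoints are still free are contracted, and each contracted pair is merged into a single node whose features are gated by the edge score. The scoring function (a linear layer on concatenated endpoint features) and the sigmoid gating are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import numpy as np

def edge_pool(x, edges, w, b):
    """Hedged sketch of edge-contraction pooling.

    x:     (N, F) node feature matrix
    edges: list of (i, j) index pairs
    w, b:  illustrative linear scoring parameters, w has shape (2F,)
    Returns the pooled node feature matrix.
    """
    # Score each edge from its endpoint features (linear layer; an assumption).
    scores = [float(np.dot(w, np.concatenate([x[i], x[j]])) + b)
              for i, j in edges]
    order = np.argsort(scores)[::-1]  # contract highest-scoring edges first
    merged = set()
    clusters = []                     # each entry: (node indices, gate value)
    for k in order:
        i, j = edges[k]
        if i in merged or j in merged:
            continue                  # each node joins at most one contraction
        merged.update((i, j))
        gate = 1.0 / (1.0 + np.exp(-scores[k]))   # sigmoid gate (assumption)
        clusters.append(([i, j], gate))
    for n in range(len(x)):           # nodes left untouched survive unchanged
        if n not in merged:
            clusters.append(([n], 1.0))
    # New node features: gated sum of each cluster's original features.
    return np.stack([gate * x[nodes].sum(axis=0) for nodes, gate in clusters])
```

With zero weights every edge scores 0, so every contracted pair is gated by sigmoid(0) = 0.5; a 4-node graph with edges (0,1) and (2,3) pools to 2 nodes.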