IsoNN: Isomorphic Neural Network for Graph Representation Learning and Classification

Deep learning models have achieved remarkable success in fields such as computer vision and natural language processing. However, traditional deep learning models are hard to apply to graph data because of its 'node-orderless' property: adjacency matrices impose an artificial, random node order on graphs, which makes the performance of deep models on graph classification tasks highly erratic and leaves the learned representations without clear interpretability. To eliminate this unnecessary node-order constraint, we propose a novel model named Isomorphic Neural Network (IsoNN), which learns graph representations by extracting isomorphic features via graph matching between the input graph and a set of templates. IsoNN has two main components: a graph isomorphic feature extraction component and a classification component. The graph isomorphic feature extraction component uses a set of subgraph templates as kernel variables to learn the subgraph patterns that may exist in the input graph and then computes the isomorphic features; a set of permutation matrices is applied within this component to break the node order imposed by the matrix representation. Three fully connected layers serve as the classification component of IsoNN. Extensive experiments on benchmark datasets demonstrate the effectiveness of IsoNN, especially in comparison with both classic and state-of-the-art graph classification methods.
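The isomorphic feature extraction described above can be illustrated with a minimal sketch. This is not the paper's implementation: the function name `iso_features`, the tensor shapes, and the sliding-window scheme are assumptions made for illustration, and enumerating all k! permutation matrices is only practical for small template sizes k.

```python
import itertools
import torch

def iso_features(adj, kernels, k):
    """Sketch of a graph isomorphic feature extraction layer (hypothetical).

    adj     : (n, n) adjacency matrix of the input graph
    kernels : (c, k, k) learnable subgraph templates (kernel variables)
    k       : template size; all k! node permutations are enumerated

    For every k x k submatrix of `adj` and every template, the feature is the
    minimum matching error over all permutation matrices, which removes the
    artificial node order of the matrix representation.
    """
    n = adj.size(0)
    # Pre-build all k! permutation matrices (feasible only for small k).
    perms = torch.stack([
        torch.eye(k)[list(p)] for p in itertools.permutations(range(k))
    ])                                                    # (k!, k, k)

    features = []
    for i in range(n - k + 1):                            # slide over rows
        for j in range(n - k + 1):                        # slide over columns
            sub = adj[i:i + k, j:j + k]                   # (k, k) region
            per_kernel = []
            for K in kernels:                             # each template
                # || P K P^T - sub ||_F for every permutation matrix P
                diff = perms @ K @ perms.transpose(1, 2) - sub
                err = diff.flatten(1).norm(dim=1)         # (k!,) matching errors
                per_kernel.append(err.min())              # best permutation
            features.append(torch.stack(per_kernel))
    return torch.stack(features)                          # (#regions, c)
```

In a full model along these lines, the resulting features would be flattened and fed into the fully connected classification layers; the kernel templates are learned jointly with those layers by backpropagation.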
