Adversarial Attack and Defense on Graph Data: A Survey

Deep neural networks (DNNs) have been widely applied to tasks such as image classification, text generation, audio recognition, and graph data analysis. However, recent studies have shown that DNNs are vulnerable to adversarial attacks. Although adversarial attack and defense strategies have been studied extensively in domains such as images and natural language processing, this knowledge is difficult to transfer directly to graph data because of its discrete, relational structure. Given the importance of graph analysis, an increasing number of studies over the past few years have examined the robustness of machine learning models on graph data. Nevertheless, existing research on adversarial behavior over graphs tends to focus on specific attack types under particular assumptions, and each work proposes its own mathematical formulation, which makes direct comparison across methods difficult. This review therefore provides an overall landscape of more than 100 papers on adversarial attack and defense strategies for graph data and establishes a unified formulation that encompasses most graph adversarial learning models. We also compare different graph attacks and defenses along with their contributions and limitations, and summarize the evaluation metrics, datasets, and future research directions. We hope this survey helps fill the gap in the literature and facilitates further development of this promising new field.
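To make the idea of a unified formulation concrete, the following is an illustrative sketch of the bilevel optimization that most graph adversarial attacks can be reduced to; the notation is introduced here for exposition and is not necessarily the survey's own. Given a clean graph $G = (A, X)$ with adjacency matrix $A$ and node features $X$, a victim model $f_{\theta}$, and a perturbation budget $\Delta$, the attacker seeks a perturbed graph $\hat{G} = (\hat{A}, \hat{X})$:

\[
\max_{\hat{G} \in \Phi(G)} \; \mathcal{L}_{\mathrm{atk}}\big(f_{\theta^{*}}(\hat{G})\big)
\quad \text{s.t.} \quad
\theta^{*} = \arg\min_{\theta} \; \mathcal{L}_{\mathrm{train}}\big(f_{\theta}(G')\big),
\]

where $\Phi(G) = \{\hat{G} : \|\hat{A} - A\|_{0} + \|\hat{X} - X\|_{0} \le \Delta\}$ is the budget-constrained set of admissible perturbations, $G' = G$ for evasion (test-time) attacks, and $G' = \hat{G}$ for poisoning (training-time) attacks. Many defenses can then be framed as the corresponding min-max problem, i.e., training $\theta$ to minimize the worst-case loss over $\Phi(G)$.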
