Graph Backdoor
Shouling Ji | Ting Wang | Ren Pang | Zhaohan Xi
[1] Benjamin Edwards,et al. Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering , 2018, SafeAI@AAAI.
[2] Yunhao Liu,et al. Experiences of landing machine learning onto market-scale mobile malware detection , 2020, EuroSys.
[3] Ryan A. Rossi,et al. The Network Data Repository with Interactive Graph Analytics and Visualization , 2015, AAAI.
[4] Yoshua Bengio,et al. How transferable are features in deep neural networks? , 2014, NIPS.
[5] Ting Wang,et al. Model-Reuse Attacks on Deep Learning Systems , 2018, CCS.
[6] Jure Leskovec,et al. Inductive Representation Learning on Large Graphs , 2017, NIPS.
[7] Yanjun Qi,et al. Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers , 2016, NDSS.
[8] Jishen Zhao,et al. DeepInspect: A Black-box Trojan Detection and Mitigation Framework for Deep Neural Networks , 2019, IJCAI.
[9] Pietro Liò,et al. Graph Attention Networks , 2017, ICLR.
[10] Wei Song,et al. DeepMem: Learning Graph Neural Network Models for Fast and Robust Memory Forensic Analysis , 2018, CCS.
[11] Mario Vento,et al. A (sub)graph isomorphism algorithm for matching large graphs , 2004, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[12] Horst Bunke,et al. A Graph Matching Based Approach to Fingerprint Classification Using Directional Variance , 2005, AVBPA.
[13] Ben Y. Zhao,et al. Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks , 2019, 2019 IEEE Symposium on Security and Privacy (SP).
[14] Stephan Günnemann,et al. Adversarial Attacks on Node Embeddings via Graph Poisoning , 2018, ICML.
[15] Jure Leskovec,et al. Hierarchical Graph Representation Learning with Differentiable Pooling , 2018, NeurIPS.
[16] Deliang Fan,et al. TBT: Targeted Neural Network Attack With Bit Trojan , 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[17] Arnold J Stromberg,et al. Subsampling , 2001, Technometrics.
[18] Ting Wang,et al. DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model , 2019, 2019 IEEE Symposium on Security and Privacy (SP).
[19] Stephan Günnemann, et al. Adversarial Attacks on Graph Neural Networks via Meta Learning , 2019, ICLR.
[20] Wen-Chuan Lee,et al. Trojaning Attack on Neural Networks , 2018, NDSS.
[21] Dan Boneh,et al. SentiNet: Detecting Physical Attacks Against Deep Learning Systems , 2018, ArXiv.
[22] Hao Chen,et al. MagNet: A Two-Pronged Defense against Adversarial Examples , 2017, CCS.
[23] Ananthram Swami,et al. The Limitations of Deep Learning in Adversarial Settings , 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[24] Jerry Li,et al. Spectral Signatures in Backdoor Attacks , 2018, NeurIPS.
[25] Philip S. Yu,et al. Heterogeneous Graph Matching Networks for Unknown Malware Detection , 2019, IJCAI.
[26] Cho-Jui Hsieh,et al. A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning , 2019, NeurIPS.
[27] Xiapu Luo,et al. A Tale of Evil Twins: Adversarial Inputs versus Poisoned Models , 2019, CCS.
[28] David A. Wagner,et al. Towards Evaluating the Robustness of Neural Networks , 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[29] Samy Bengio,et al. Adversarial Machine Learning at Scale , 2016, ICLR.
[30] Bao Gia Doan, et al. Februus: Input Purification Defence Against Trojan Attacks on Deep Neural Network Systems , 2019, ArXiv.
[31] Yunfei Liu,et al. Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks , 2020, ECCV.
[32] Sijia Liu,et al. Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective , 2019, IJCAI.
[33] Jonathon Shlens,et al. Explaining and Harnessing Adversarial Examples , 2014, ICLR.
[34] Pasquale Foggia, et al. A (Sub)Graph Isomorphism Algorithm for Matching Large Graphs , 2004, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[35] David A. Wagner,et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples , 2018, ICML.
[36] Lorenzo Cavallaro,et al. Intriguing Properties of Adversarial ML Attacks in the Problem Space , 2019, 2020 IEEE Symposium on Security and Privacy (SP).
[37] Binghui Wang,et al. Attacking Graph-based Classification via Manipulating the Graph Structure , 2019, CCS.
[38] Wen-Chuan Lee,et al. NIC: Detecting Adversarial Samples with Neural Network Invariant Checking , 2019, NDSS.
[39] Stephan Günnemann,et al. Adversarial Attacks on Neural Networks for Graph Data , 2018, KDD.
[40] Yanjun Qi,et al. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks , 2017, NDSS.
[41] Jinyuan Jia,et al. Backdoor Attacks to Graph Neural Networks , 2020, SACMAT.
[42] Jure Leskovec,et al. Representation Learning on Graphs: Methods and Applications , 2017, IEEE Data Eng. Bull..
[43] Xiangyu Zhang,et al. ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation , 2019, CCS.
[44] Michael Backes,et al. HideNoSeek: Camouflaging Malicious JavaScript in Benign ASTs , 2019, CCS.
[45] Swapnaja Hiray,et al. Comparative Analysis of Feature Extraction Methods of Malware Detection , 2015 .
[46] Hugo Ceulemans,et al. Large-scale comparison of machine learning methods for drug target prediction on ChEMBL , 2018, Chemical science.
[47] Rik Sarkar,et al. Multi-scale Attributed Node Embedding , 2019, J. Complex Networks.
[48] Paolo Frasconi,et al. Bilevel Programming for Hyperparameter Optimization and Meta-Learning , 2018, ICML.
[49] Swarat Chaudhuri,et al. AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation , 2018, 2018 IEEE Symposium on Security and Privacy (SP).
[50] D. Sculley,et al. Hidden Technical Debt in Machine Learning Systems , 2015, NIPS.
[51] Tudor Dumitras,et al. When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks , 2018, USENIX Security Symposium.
[52] Damith Chinthana Ranasinghe,et al. STRIP: a defence against trojan attacks on deep neural networks , 2019, ACSAC.
[53] Jure Leskovec,et al. How Powerful are Graph Neural Networks? , 2018, ICLR.
[54] Dawn Xiaodong Song,et al. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning , 2017, ArXiv.
[55] Thomas Blaschke,et al. The rise of deep learning in drug discovery. , 2018, Drug discovery today.
[56] Le Song,et al. Adversarial Attack on Graph Structured Data , 2018, ICML.
[57] Fei Wang,et al. MoFlow: An Invertible Flow Model for Generating Molecular Graphs , 2020, KDD.
[58] Fabio Roli,et al. Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning , 2018, CCS.
[59] Moustapha Cissé,et al. Countering Adversarial Images using Input Transformations , 2018, ICLR.
[60] Philip S. Yu,et al. Adversarial Defense Framework for Graph Neural Network , 2019, ArXiv.
[61] Ting Wang,et al. Backdoor attacks against learning systems , 2017, 2017 IEEE Conference on Communications and Network Security (CNS).
[62] Max Welling,et al. Semi-Supervised Classification with Graph Convolutional Networks , 2016, ICLR.
[63] Jacques Klein,et al. AndroZoo: Collecting Millions of Android Apps for the Research Community , 2016, 2016 IEEE/ACM 13th Working Conference on Mining Software Repositories (MSR).
[64] Aleksander Madry,et al. Towards Deep Learning Models Resistant to Adversarial Attacks , 2017, ICLR.
[65] Ben Y. Zhao,et al. Latent Backdoor Attacks on Deep Neural Networks , 2019, CCS.
[66] Tudor Dumitras,et al. Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks , 2018, NeurIPS.
[67] Ananthram Swami,et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks , 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[68] Benjamin Zi Hao Zhao,et al. Invisible Backdoor Attacks Against Deep Neural Networks , 2019, ArXiv.
[69] Joan Bruna,et al. Intriguing properties of neural networks , 2013, ICLR.
[70] Dan Boneh,et al. Ensemble Adversarial Training: Attacks and Defenses , 2017, ICLR.
[71] Lei Zhang,et al. Towards a scalable resource-driven approach for detecting repackaged Android applications , 2014, ACSAC.
[72] Jinyuan Jia,et al. Graph-based Security and Privacy Analytics via Collective Classification with Joint Weight Learning and Propagation , 2018, NDSS.
[73] Fabio Roli,et al. Poisoning Adaptive Biometric Systems , 2012, SSPR/SPR.
[74] Rok Sosic,et al. Prioritizing network communities , 2018, Nature Communications.
[75] Brendan Dolan-Gavitt,et al. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain , 2017, ArXiv.