Aditya Kanade | Shirish Shevade | Soham Pal | Yash Gupta