Sungroh Yoon | Ho Bae | Jaehee Jang | Dahuin Jung | Hyemi Jang | Heonseok Ha