Improved resistance of neural networks to adversarial images through generative pre-training