Adversarial Robust Model Compression using In-Train Pruning
Walter Stechele | Christian Wressnegger | Nael Fasfous | Christian Unger | Alexander Frickenstein | Lukas Frickenstein | Anmol Singh | Qi Zhao | Manoj-Rohit Vemparala | Sreetama Sarkar | Sabine Kuhn | Naveen-Shankar Nagaraja