[1] Muhammad Shafique et al. CANN: Curable Approximations for High-Performance Deep Neural Network Accelerators, 2019, 2019 56th ACM/IEEE Design Automation Conference (DAC).
[2] Dan Meng et al. DNNGuard: An Elastic Heterogeneous DNN Accelerator Architecture against Adversarial Attacks, 2020, ASPLOS.
[3] Tarek Frikha et al. Defensive Approximation: Securing CNNs Using Approximate Computing, 2020, ASPLOS.
[4] Gang Qu et al. Security of Neural Networks from Hardware Perspective: A Survey and Beyond, 2021, 2021 26th Asia and South Pacific Design Automation Conference (ASP-DAC).
[5] Fabio Roli et al. Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks, 2018, USENIX Security Symposium.
[6] Muhammad Shafique et al. An Updated Survey of Efficient Hardware Architectures for Accelerating Deep Convolutional Neural Networks, 2020, Future Internet.
[7] Osman Hasan et al. Probabilistic Error Analysis of Approximate Adders and Multipliers, 2019, Approximate Circuits.
[8] Muhammad Shafique et al. TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks, 2019, 2019 IEEE 25th International Symposium on On-Line Testing and Robust System Design (IOLTS).
[9] Dawn Xiaodong Song et al. Delving into Transferable Adversarial Examples and Black-box Attacks, 2016, ICLR.
[10] Iraklis Anagnostopoulos et al. Positive/Negative Approximate Multipliers for DNN Accelerators, 2021, 2021 IEEE/ACM International Conference On Computer Aided Design (ICCAD).
[11] Kanad Basu et al. Exploring Fault-Energy Trade-offs in Approximate DNN Hardware Accelerators, 2021, 2021 22nd International Symposium on Quality Electronic Design (ISQED).
[12] Lukás Sekanina et al. EvoApprox8b: Library of Approximate Adders and Multipliers for Circuit Design and Benchmarking of Approximation Methods, 2017, Design, Automation & Test in Europe Conference & Exhibition (DATE).
[13] Muhammad Shafique et al. QuSecNets: Quantization-based Defense Mechanism for Securing Deep Neural Network against Adversarial Attacks, 2018, 2019 IEEE 25th International Symposium on On-Line Testing and Robust System Design (IOLTS).
[14] W. Brendel et al. Foolbox: A Python Toolbox to Benchmark the Robustness of Machine Learning Models, 2017.
[15] Muhammad Shafique et al. CAxCNN: Towards the Use of Canonic Sign Digit Based Approximation for Hardware-Friendly Convolutional Neural Networks, 2020, IEEE Access.
[16] Alex Krizhevsky et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[17] Ananthram Swami et al. Practical Black-Box Attacks against Machine Learning, 2016, AsiaCCS.