DeepDyve: Dynamic Verification for Deep Neural Networks
[1] Huawei Li, et al. Retraining-based timing error mitigation for hardware neural networks, 2015, 2015 Design, Automation & Test in Europe Conference & Exhibition (DATE).
[2] Geoffrey E. Hinton, et al. Distilling the Knowledge in a Neural Network, 2015, arXiv.
[3] Yiran Chen, et al. Accelerator-friendly neural-network training: Learning variations and defects in RRAM crossbar, 2017, 2017 Design, Automation & Test in Europe Conference & Exhibition (DATE).
[4] Raghuraman Krishnamoorthi, et al. Quantizing deep convolutional networks for efficient inference: A whitepaper, 2018, arXiv.
[5] David A. Patterson, et al. In-datacenter performance analysis of a tensor processing unit, 2017, 2017 ACM/IEEE 44th Annual International Symposium on Computer Architecture (ISCA).
[6] S. Piche, et al. Robustness of feedforward neural networks, 1992, [Proceedings 1992] IJCNN International Joint Conference on Neural Networks.
[7] Johannes Stallkamp, et al. Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition, 2012, Neural Networks.
[8] Yanzhi Wang, et al. Fault Sneaking Attack: a Stealthy Framework for Misleading Deep Neural Networks, 2019, 2019 56th ACM/IEEE Design Automation Conference (DAC).
[9] Fernando Morgado Dias, et al. FTSET - a software tool for fault tolerance evaluation and improvement, 2009, Neural Computing and Applications.
[10] Xuefei Ning, et al. Fault-tolerant training with on-line fault detection for RRAM-based neural computing systems, 2017, 2017 54th ACM/EDAC/IEEE Design Automation Conference (DAC).
[11] Ananthram Swami, et al. Practical Black-Box Attacks against Machine Learning, 2016, AsiaCCS.
[12] Caro Lucas, et al. Relaxed Fault-Tolerant Hardware Implementation of Neural Networks in the Presence of Multiple Transient Errors, 2012, IEEE Transactions on Neural Networks and Learning Systems.
[13] Gerd Ascheid, et al. Efficient On-Line Error Detection and Mitigation for Deep Neural Network Accelerators, 2018, SAFECOMP.
[14] Min Li, et al. D2NN: a fine-grained dual modular redundancy framework for deep neural networks, 2019, ACSAC.
[15] Thierry Moreau, et al. MATIC: Learning around errors for efficient low-voltage neural network accelerators, 2017, 2018 Design, Automation & Test in Europe Conference & Exhibition (DATE).
[16] Quoc V. Le, et al. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks, 2019, ICML.
[17] Xiang Gu, et al. Tolerating Soft Errors in Deep Learning Accelerators with Reliable On-Chip Memory Designs, 2018, 2018 IEEE International Conference on Networking, Architecture and Storage (NAS).
[18] Deliang Fan, et al. Bit-Flip Attack: Crushing Neural Network With Progressive Bit Search, 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[19] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[20] Chaitali Chakrabarti, et al. Defending and Harnessing the Bit-Flip Based Adversarial Weight Attack, 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[21] Gerd Ascheid, et al. Automated design of error-resilient and hardware-efficient deep neural networks, 2019, Neural Computing and Applications.
[22] Joel Emer, et al. Eyeriss: a spatial architecture for energy-efficient dataflow for convolutional neural networks, 2016, 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA).
[23] Yann LeCun, et al. Optimal Brain Damage, 1989, NIPS.
[24] Shuhei Yamashita, et al. Introduction of ISO 26262 'Road vehicles - Functional safety', 2012.
[25] Saman Ghili, et al. Tiny ImageNet Visual Recognition Challenge, 2014.
[26] Boris Murmann, et al. SRAM voltage scaling for energy-efficient convolutional neural networks, 2017, 2017 18th International Symposium on Quality Electronic Design (ISQED).
[27] Tudor Dumitras, et al. Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks, 2019, USENIX Security Symposium.
[28] Qiang Xu, et al. Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks, 2018, AAAI.
[29] Benjamin W. Wah, et al. Fault tolerant neural networks with hybrid redundancy, 1990, 1990 IJCNN International Joint Conference on Neural Networks.
[30] Ya Le, et al. Tiny ImageNet Visual Recognition Challenge, 2015.
[31] Sylvain Pelissier, et al. Practical Fault Attack against the Ed25519 and EdDSA Signature Schemes, 2017, 2017 Workshop on Fault Diagnosis and Tolerance in Cryptography (FDTC).
[32] J. Stallkamp, et al. 2012 Special Issue, 2012, Neural Networks.
[33] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[34] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[35] Gu-Yeon Wei, et al. Minerva: Enabling Low-Power, Highly-Accurate Deep Neural Network Accelerators, 2016, 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA).
[36] Song Han, et al. Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding, 2015, ICLR.
[37] Guanpeng Li, et al. Understanding Error Propagation in Deep Learning Neural Network (DNN) Accelerators and Applications, 2017, SC17: International Conference for High Performance Computing, Networking, Storage and Analysis.
[38] Chris Fallin, et al. Flipping bits in memory without accessing them: An experimental study of DRAM disturbance errors, 2014, 2014 ACM/IEEE 41st International Symposium on Computer Architecture (ISCA).
[39] Masanori Hashimoto, et al. When Single Event Upset Meets Deep Neural Networks: Observations, Explorations, and Remedies, 2019, 2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC).
[40] Fan Yao, et al. DeepHammer: Depleting the Intelligence of Deep Neural Networks through Targeted Chain of Bit Flips, 2020, USENIX Security Symposium.
[41] Bo Chen, et al. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications, 2017, arXiv.
[42] Gu-Yeon Wei, et al. Ares: A framework for quantifying the resilience of deep neural networks, 2018, 2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC).
[43] Hao Chen, et al. MagNet: A Two-Pronged Defense against Adversarial Examples, 2017, CCS.
[44] Qiang Xu, et al. Fault injection attack on deep neural network, 2017, 2017 IEEE/ACM International Conference on Computer-Aided Design (ICCAD).
[45] Akashi Satoh, et al. Clock glitch generator on SAKURA-G for fault injection attack against a cryptographic circuit, 2016, 2016 IEEE 5th Global Conference on Consumer Electronics.
[46] Samy Bengio, et al. Adversarial Machine Learning at Scale, 2016, ICLR.
[47] Chenchen Liu, et al. Rescuing memristor-based neuromorphic design with high defects, 2017, 2017 54th ACM/EDAC/IEEE Design Automation Conference (DAC).
[48] Todd M. Austin, et al. DIVA: a reliable substrate for deep submicron microarchitecture design, 1999, MICRO-32, Proceedings of the 32nd Annual ACM/IEEE International Symposium on Microarchitecture.