Detecting Operational Adversarial Examples for Reliable Deep Learning
[1] Bev Littlewood, et al. Software reliability and dependability: a roadmap, 2000, ICSE '00.
[2] Stefano Russo, et al. Operation is the Hardest Teacher: Estimating DNN Accuracy Looking for Mispredictions, 2021, 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE).
[3] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[4] Bev Littlewood, et al. Guidelines for Statistical Testing, 1997.
[5] Bev Littlewood, et al. Evaluating Testing Methods by Delivered Reliability, 1998, IEEE Trans. Software Eng.
[6] Junfeng Yang, et al. DeepXplore: Automated Whitebox Testing of Deep Learning Systems, 2017, SOSP.
[7] Edward N. Adams, et al. Optimizing Preventive Service of Software Products, 1984, IBM J. Res. Dev.
[8] Sameer Singh, et al. Generating Natural Adversarial Examples, 2017, ICLR.
[9] John D. Musa, et al. Operational profiles in software-reliability engineering, 1993, IEEE Software.
[10] Miryung Kim, et al. Is neuron coverage a meaningful measure for testing deep neural networks?, 2020, ESEC/SIGSOFT FSE.
[11] Sarfraz Khurshid, et al. DeepRoad: GAN-Based Metamorphic Testing and Input Validation Framework for Autonomous Driving Systems, 2018, 2018 33rd IEEE/ACM International Conference on Automated Software Engineering (ASE).