Testing DNN Image Classifiers for Confusion & Bias Errors
[1] Guy N. Rothblum, et al. Fairness Through Computationally-Bounded Awareness, 2018, NeurIPS.
[2] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[3] Daniel Kroening, et al. Concolic Testing for Deep Neural Networks, 2018, 2018 33rd IEEE/ACM International Conference on Automated Software Engineering (ASE).
[4] Premkumar T. Devanbu, et al. Sample size vs. bias in defect prediction, 2013, ESEC/FSE 2013.
[5] Premkumar T. Devanbu, et al. On the "naturalness" of buggy code, 2015, ICSE.
[6] Sarfraz Khurshid, et al. DeepRoad: GAN-based Metamorphic Autonomous Driving System Testing, 2018, ArXiv.
[7] Mykel J. Kochenderfer, et al. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks, 2017, CAV.
[8] Junfeng Yang, et al. DeepXplore: Automated Whitebox Testing of Deep Learning Systems, 2017, SOSP.
[9] Junfeng Yang, et al. Formal Security Analysis of Neural Networks using Symbolic Intervals, 2018, USENIX Security Symposium.
[10] Antonio Criminisi, et al. Measuring Neural Net Robustness with Constraints, 2016, NIPS.
[11] Ian H. Witten, et al. Data Mining: Practical Machine Learning Tools and Techniques, 2014.
[12] Premkumar T. Devanbu, et al. BugCache for inspections: hit or miss?, 2011, ESEC/FSE '11.
[13] Simon Haykin, et al. Gradient-Based Learning Applied to Document Recognition, 2001.
[14] Yanjun Qi, et al. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks, 2017, NDSS.
[15] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[16] Bolei Zhou, et al. Network Dissection: Quantifying Interpretability of Deep Visual Representations, 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[17] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[18] Ian Goodfellow, et al. TensorFuzz: Debugging Neural Networks with Coverage-Guided Fuzzing, 2018, ICML.
[19] Jan Hendrik Metzen, et al. On Detecting Adversarial Perturbations, 2017, ICLR.
[20] Hang Su, et al. Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples, 2017, ArXiv.
[21] Jingyi Wang, et al. Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing, 2018, 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE).
[22] Shin Yoo, et al. Guiding Deep Learning System Testing Using Surprise Adequacy, 2018, 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE).
[23] Lionel C. Briand, et al. A systematic and comprehensive investigation of methods to build and evaluate fault prediction models, 2010, J. Syst. Softw.
[24] Yuriy Brun, et al. Software fairness, 2018, ESEC/SIGSOFT FSE.
[25] Nina Narodytska, et al. Simple Black-Box Adversarial Perturbations for Deep Networks, 2016, ArXiv.
[26] Quanshi Zhang, et al. Visual interpretability for deep learning: a survey, 2018, Frontiers of Information Technology & Electronic Engineering.
[27] Patrick D. McDaniel, et al. Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning, 2018, ArXiv.
[28] Aditya Krishna Menon, et al. The cost of fairness in binary classification, 2018, FAT.
[29] Suman Jana, et al. DeepTest: Automated Testing of Deep-Neural-Network-Driven Autonomous Cars, 2017, 2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE).
[30] Krishna P. Gummadi, et al. Fairness Constraints: Mechanisms for Fair Classification, 2015, AISTATS.
[31] Lei Ma, et al. DeepGauge: Multi-Granularity Testing Criteria for Deep Learning Systems, 2018, 2018 33rd IEEE/ACM International Conference on Automated Software Engineering (ASE).
[32] Dawn Xiaodong Song, et al. Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong, 2017, ArXiv.
[33] Dawn Xiaodong Song, et al. Adversarial Examples for Generative Models, 2017, 2018 IEEE Security and Privacy Workshops (SPW).
[34] Patrick D. McDaniel, et al. On the (Statistical) Detection of Adversarial Examples, 2017, ArXiv.
[35] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[36] J. Zico Kolter, et al. Scaling provable adversarial defenses, 2018, NeurIPS.
[37] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[38] Toniann Pitassi, et al. Learning Fair Representations, 2013, ICML.
[39] Matt J. Kusner, et al. Counterfactual Fairness, 2017, NIPS.
[40] Geoffrey E. Hinton, et al. Learning representations by back-propagating errors, 1986, Nature.
[41] Timnit Gebru, et al. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, 2018, FAT.
[42] Abhishek Das, et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization, 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[43] Franco Turini, et al. k-NN as an implementation of situation testing for discrimination discovery and prevention, 2011, KDD.
[44] Premkumar T. Devanbu, et al. How, and why, process metrics are better, 2013, 2013 35th International Conference on Software Engineering (ICSE).
[45] Pietro Perona, et al. Microsoft COCO: Common Objects in Context, 2014, ECCV.
[46] Grigorios Tsoumakas, et al. Multi-Label Classification: An Overview, 2007, Int. J. Data Warehous. Min.
[47] Geoffrey E. Hinton, et al. Rectified Linear Units Improve Restricted Boltzmann Machines, 2010, ICML.
[48] Aditi Raghunathan, et al. Certified Defenses against Adversarial Examples, 2018, ICLR.
[49] Ananthram Swami, et al. Practical Black-Box Attacks against Machine Learning, 2016, AsiaCCS.
[50] J. Dunning. The elephant in the room, 2013, European Journal of Cardio-Thoracic Surgery.
[51] Junfeng Yang, et al. Towards Practical Verification of Machine Learning: The Case of Computer Vision Systems, 2017, ArXiv.
[52] Sandy H. Huang, et al. Adversarial Attacks on Neural Network Policies, 2017, ICLR.
[53] Pan He, et al. Adversarial Examples: Attacks and Defenses for Deep Learning, 2017, IEEE Transactions on Neural Networks and Learning Systems.
[54] Baishakhi Ray, et al. Metric Learning for Adversarial Robustness, 2019, NeurIPS.
[55] Yuriy Brun, et al. Fairness testing: testing software for discrimination, 2017, ESEC/SIGSOFT FSE.
[56] Shai Ben-David, et al. Empirical Risk Minimization under Fairness Constraints, 2018, NeurIPS.
[57] Pooja Kamavisdar, et al. A Survey on Image Classification Approaches and Techniques, 2013.
[58] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[59] Yang Song, et al. Improving the Robustness of Deep Neural Networks via Stability Training, 2016, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[60] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[61] Jieyu Zhao, et al. Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints, 2017, EMNLP.
[62] Pascal Vincent, et al. Representation Learning: A Review and New Perspectives, 2012, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[63] Luca Rigazio, et al. Towards Deep Neural Network Architectures Robust to Adversarial Examples, 2014, ICLR.
[64] Wojciech Samek, et al. Methods for interpreting and understanding deep neural networks, 2017, Digit. Signal Process.
[65] Ali Farhadi, et al. Situation Recognition: Visual Semantic Role Labeling for Image Understanding, 2016, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[66] Jason Yosinski, et al. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[67] Wen-Chuan Lee, et al. MODE: automated neural network model debugging via state differential analysis and input selection, 2018, ESEC/SIGSOFT FSE.
[68] Akito Monden, et al. Revisiting common bug prediction findings using effort-aware models, 2010, 2010 IEEE International Conference on Software Maintenance.
[69] Nathan Srebro, et al. Equality of Opportunity in Supervised Learning, 2016, NIPS.
[70] David A. Forsyth, et al. NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles, 2017, ArXiv.
[71] Patrick D. McDaniel, et al. Extending Defensive Distillation, 2017, ArXiv.
[72] W. Kruskal, et al. Use of Ranks in One-Criterion Variance Analysis, 1952.
[73] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[74] Aditya Krishna Menon, et al. Noise-tolerant fair classification, 2019, NeurIPS.
[75] Zachary Chase Lipton. The mythos of model interpretability, 2016, ACM Queue.
[76] Alan L. Yuille, et al. Feature Denoising for Improving Adversarial Robustness, 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[77] Toniann Pitassi, et al. Fairness through awareness, 2011, ITCS '12.
[78] Ryan R. Curtin, et al. Detecting Adversarial Samples from Artifacts, 2017, ArXiv.
[79] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[80] Toon Calders, et al. Building Classifiers with Independency Constraints, 2009, 2009 IEEE International Conference on Data Mining Workshops.
[81] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[82] Alex Krizhevsky. Learning Multiple Layers of Features from Tiny Images, 2009.
[83] Uri Shaham, et al. Understanding Adversarial Training: Increasing Local Stability of Neural Nets through Robust Optimization, 2015, ArXiv.
[84] Krishna P. Gummadi, et al. Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment, 2016, WWW.
[85] Min Wu, et al. Safety Verification of Deep Neural Networks, 2016, CAV.