Testing DNN Image Classifiers for Confusion & Bias Errors
[1] W. Kruskal, et al. Use of Ranks in One-Criterion Variance Analysis, 1952.
[2] Geoffrey E. Hinton, et al. Learning representations by back-propagating errors, 1986, Nature.
[3] Yann LeCun, et al. Gradient-Based Learning Applied to Document Recognition, 1998, Proceedings of the IEEE.
[4] Grigorios Tsoumakas, et al. Multi-Label Classification: An Overview, 2007, Int. J. Data Warehous. Min.
[5] Toon Calders, et al. Building Classifiers with Independency Constraints, 2009, 2009 IEEE International Conference on Data Mining Workshops.
[6] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[7] Akito Monden, et al. Revisiting common bug prediction findings using effort-aware models, 2010, 2010 IEEE International Conference on Software Maintenance.
[8] Geoffrey E. Hinton, et al. Rectified Linear Units Improve Restricted Boltzmann Machines, 2010, ICML.
[9] Lionel C. Briand, et al. A systematic and comprehensive investigation of methods to build and evaluate fault prediction models, 2010, J. Syst. Softw.
[10] Franco Turini, et al. k-NN as an implementation of situation testing for discrimination discovery and prevention, 2011, KDD.
[11] Premkumar T. Devanbu, et al. BugCache for inspections: hit or miss?, 2011, ESEC/FSE '11.
[12] Toniann Pitassi, et al. Fairness through awareness, 2011, ITCS '12.
[13] Pascal Vincent, et al. Representation Learning: A Review and New Perspectives, 2012, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[14] Premkumar T. Devanbu, et al. Sample size vs. bias in defect prediction, 2013, ESEC/FSE 2013.
[15] Pooja Kamavisdar, et al. A Survey on Image Classification Approaches and Techniques, 2013.
[16] J. Dunning. The elephant in the room, 2013, European Journal of Cardio-Thoracic Surgery.
[17] Premkumar T. Devanbu, et al. How, and why, process metrics are better, 2013, 2013 35th International Conference on Software Engineering (ICSE).
[18] Pietro Perona, et al. Microsoft COCO: Common Objects in Context, 2014, ECCV.
[19] Ian H. Witten, et al. Data Mining: Practical Machine Learning Tools and Techniques, 2014.
[20] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[21] Jason Yosinski, et al. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[22] Luca Rigazio, et al. Towards Deep Neural Network Architectures Robust to Adversarial Examples, 2014, ICLR.
[23] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[24] Uri Shaham, et al. Understanding Adversarial Training: Increasing Local Stability of Neural Nets through Robust Optimization, 2015, ArXiv.
[25] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[26] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[27] Premkumar T. Devanbu, et al. On the "naturalness" of buggy code, 2015, ICSE.
[28] Nina Narodytska, et al. Simple Black-Box Adversarial Perturbations for Deep Networks, 2016, ArXiv.
[29] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[30] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[31] Antonio Criminisi, et al. Measuring Neural Net Robustness with Constraints, 2016, NIPS.
[32] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[33] Yang Song, et al. Improving the Robustness of Deep Neural Networks via Stability Training, 2016, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[34] Ali Farhadi, et al. Situation Recognition: Visual Semantic Role Labeling for Image Understanding, 2016, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[35] Matt J. Kusner, et al. Counterfactual Fairness, 2017, NIPS.
[36] Patrick D. McDaniel, et al. On the (Statistical) Detection of Adversarial Examples, 2017, ArXiv.
[37] David A. Forsyth, et al. NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles, 2017, ArXiv.
[38] Ryan R. Curtin, et al. Detecting Adversarial Samples from Artifacts, 2017, ArXiv.
[39] Krishna P. Gummadi, et al. Fairness Constraints: Mechanisms for Fair Classification, 2015, AISTATS.
[40] Patrick D. McDaniel, et al. Extending Defensive Distillation, 2017, ArXiv.
[41] Jan Hendrik Metzen, et al. On Detecting Adversarial Perturbations, 2017, ICLR.
[42] Ananthram Swami, et al. Practical Black-Box Attacks against Machine Learning, 2016, AsiaCCS.
[43] Junfeng Yang, et al. DeepXplore: Automated Whitebox Testing of Deep Learning Systems, 2017, SOSP.
[44] Krishna P. Gummadi, et al. Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment, 2016, WWW.
[45] Yuriy Brun, et al. Fairness testing: testing software for discrimination, 2017, ESEC/SIGSOFT FSE.
[46] Dawn Xiaodong Song, et al. Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong, 2017, ArXiv.
[47] Bolei Zhou, et al. Network Dissection: Quantifying Interpretability of Deep Visual Representations, 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[48] Jieyu Zhao, et al. Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints, 2017, EMNLP.
[49] Hang Su, et al. Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples, 2017, ArXiv.
[50] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[51] Sandy H. Huang, et al. Adversarial Attacks on Neural Network Policies, 2017, ICLR.
[52] Min Wu, et al. Safety Verification of Deep Neural Networks, 2016, CAV.
[53] Junfeng Yang, et al. Towards Practical Verification of Machine Learning: The Case of Computer Vision Systems, 2017, ArXiv.
[54] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[55] Wen-Chuan Lee, et al. MODE: automated neural network model debugging via state differential analysis and input selection, 2018, ESEC/SIGSOFT FSE.
[56] Shai Ben-David, et al. Empirical Risk Minimization under Fairness Constraints, 2018, NeurIPS.
[57] Timnit Gebru, et al. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, 2018, FAT.
[58] J. Zico Kolter, et al. Scaling provable adversarial defenses, 2018, NeurIPS.
[59] Patrick D. McDaniel, et al. Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning, 2018, ArXiv.
[60] Quanshi Zhang, et al. Visual interpretability for deep learning: a survey, 2018, Frontiers of Information Technology & Electronic Engineering.
[61] Dawn Xiaodong Song, et al. Adversarial Examples for Generative Models, 2017, 2018 IEEE Security and Privacy Workshops (SPW).
[62] Aditi Raghunathan, et al. Certified Defenses against Adversarial Examples, 2018, ICLR.
[63] Aditya Krishna Menon, et al. The cost of fairness in binary classification, 2018, FAT.
[64] Wojciech Samek, et al. Methods for interpreting and understanding deep neural networks, 2017, Digit. Signal Process.
[65] Yuriy Brun, et al. Software fairness, 2018, ESEC/SIGSOFT FSE.
[66] Daniel Kroening, et al. Concolic Testing for Deep Neural Networks, 2018, 2018 33rd IEEE/ACM International Conference on Automated Software Engineering (ASE).
[67] Suman Jana, et al. DeepTest: Automated Testing of Deep-Neural-Network-Driven Autonomous Cars, 2017, 2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE).
[68] Zachary Chase Lipton. The mythos of model interpretability, 2016, ACM Queue.
[69] Guy N. Rothblum, et al. Fairness Through Computationally-Bounded Awareness, 2018, NeurIPS.
[70] Lei Ma, et al. DeepGauge: Multi-Granularity Testing Criteria for Deep Learning Systems, 2018, 2018 33rd IEEE/ACM International Conference on Automated Software Engineering (ASE).
[71] Jingyi Wang, et al. Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing, 2018, 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE).
[72] Ian Goodfellow, et al. TensorFuzz: Debugging Neural Networks with Coverage-Guided Fuzzing, 2018, ICML.
[73] S. Jana, et al. DeepXplore: automated whitebox testing of deep learning systems, 2019, Commun. ACM.
[74] Shin Yoo, et al. Guiding Deep Learning System Testing Using Surprise Adequacy, 2018, 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE).
[75] Aditya Krishna Menon, et al. Noise-tolerant fair classification, 2019, NeurIPS.
[76] Abhishek Das, et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization, 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[77] Chengzhi Mao, et al. Metric Learning for Adversarial Robustness, 2019, NeurIPS.
[78] Wallace S. Rutkowski, et al. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.