A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees
Min Wu | Matthew Wicker | Xiaowei Huang | Marta Z. Kwiatkowska | Wenjie Ruan
[1] Daniel Kroening, et al. Concolic Testing for Deep Neural Networks, 2018, 33rd IEEE/ACM International Conference on Automated Software Engineering (ASE).
[2] Tso-Jung Yen, et al. Superpixels Generating from the Pixel-based K-Means Clustering, 2015, J. Multim. Process. Technol.
[3] Matthew Wicker, et al. Feature-Guided Black-Box Safety Testing of Deep Neural Networks, 2017, TACAS.
[4] Mykel J. Kochenderfer, et al. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks, 2017, CAV.
[5] Matthew Mirman, et al. Differentiable Abstract Interpretation for Provably Robust Neural Networks, 2018, ICML.
[6] Inderjit S. Dhillon, et al. Towards Fast Computation of Certified Robustness for ReLU Networks, 2018, ICML.
[7] Sebastian Bittel, et al. Pixel-wise Segmentation of Street with Neural Networks, 2015, arXiv.
[8] D. Ward, et al. Verification and validation of neural networks for safety-critical applications, 2002, Proceedings of the 2002 American Control Conference.
[9] Timon Gehr, et al. An abstract domain for certifying neural networks, 2019, Proc. ACM Program. Lang.
[10] Fabio Roli, et al. Evasion Attacks against Machine Learning at Test Time, 2013, ECML/PKDD.
[11] Min Wu, et al. Safety Verification of Deep Neural Networks, 2016, CAV.
[12] David G. Lowe. Distinctive Image Features from Scale-Invariant Keypoints, 2004, International Journal of Computer Vision.
[13] Russ Tedrake, et al. Evaluating Robustness of Neural Networks with Mixed Integer Programming, 2017, ICLR.
[14] Xin Zhang, et al. End to End Learning for Self-Driving Cars, 2016, arXiv.
[16] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[17] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[18] H. Jaap van den Herik, et al. Progressive Strategies for Monte-Carlo Tree Search, 2008, New Mathematics and Natural Computation.
[19] Seyed-Mohsen Moosavi-Dezfooli, et al. Universal Adversarial Perturbations, 2017, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[20] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2016, IEEE European Symposium on Security and Privacy (EuroS&P).
[21] Guigang Zhang, et al. Deep Learning, 2016, Int. J. Semantic Comput.
[22] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[23] J. Zico Kolter, et al. Scaling provable adversarial defenses, 2018, NeurIPS.
[24] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[25] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, arXiv.
[26] Ananthram Swami, et al. Practical Black-Box Attacks against Machine Learning, 2016, AsiaCCS.
[27] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2017, IEEE Symposium on Security and Privacy (SP).
[28] Risto Miikkulainen, et al. Intrusion Detection with Neural Networks, 1997, NIPS.
[29] R. R. Zakrzewski. Verification of a trained neural network accuracy, 2001, International Joint Conference on Neural Networks (IJCNN).
[30] Jack W. Stokes, et al. Large-scale malware classification using random projections and neural networks, 2013, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[31] Zhou Wang, et al. Multiscale structural similarity for image quality assessment, 2003, Thirty-Seventh Asilomar Conference on Signals, Systems & Computers.
[32] Daniel Kroening, et al. Global Robustness Evaluation of Deep Neural Networks with Provable Guarantees for L0 Norm, 2018, arXiv.
[33] J. Zico Kolter, et al. Provable defenses against adversarial examples via the convex outer adversarial polytope, 2017, ICML.
[35] Johannes Stallkamp, et al. Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition, 2012, Neural Networks.
[36] Corina S. Pasareanu, et al. DeepSafe: A Data-Driven Approach for Assessing Robustness of Neural Networks, 2018, ATVA.
[37] Nina Narodytska, et al. Simple Black-Box Adversarial Attacks on Deep Neural Networks, 2017, IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[38] Luca Pulina, et al. An Abstraction-Refinement Approach to Verification of Artificial Neural Networks, 2010, CAV.
[39] Luc Van Gool, et al. Speeded-Up Robust Features (SURF), 2008, Comput. Vis. Image Underst.
[40] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[41] Daniel Kroening, et al. Global Robustness Evaluation of Deep Neural Networks with Provable Guarantees for the Hamming Distance, 2019, IJCAI.
[42] Cho-Jui Hsieh, et al. Efficient Neural Network Robustness Certification with General Activation Functions, 2018, NeurIPS.
[43] Gavin Brown, et al. Is Deep Learning Safe for Robot Vision? Adversarial Examples Against the iCub Humanoid, 2017, IEEE International Conference on Computer Vision Workshops (ICCVW).
[44] Yann LeCun, et al. Traffic sign recognition with multi-scale Convolutional Networks, 2011, International Joint Conference on Neural Networks (IJCNN).
[45] Vin de Silva, et al. On the Local Behavior of Spaces of Natural Images, 2007, International Journal of Computer Vision.
[46] Xiaowei Huang, et al. Reachability Analysis of Deep Neural Networks with Provable Guarantees, 2018, IJCAI.
[47] Corina S. Pasareanu, et al. DeepSafe: A Data-driven Approach for Checking Adversarial Robustness in Neural Networks, 2017, arXiv.
[48] Clark W. Barrett, et al. Provably Minimally-Distorted Adversarial Examples, 2017.
[49] Csaba Szepesvári, et al. Bandit Based Monte-Carlo Planning, 2006, ECML.