Kaidi Xu | Zhouxing Shi | Huan Zhang | Kai-Wei Chang | Minlie Huang | Bhavya Kailkhura | Xue Lin | Cho-Jui Hsieh
[1] Yizheng Chen, et al. MixTrain: Scalable Training of Formally Robust Neural Networks, 2018, arXiv.
[2] Christopher Potts, et al. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank, 2013, EMNLP.
[3] Mykel J. Kochenderfer, et al. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks, 2017, CAV.
[4] Yang Yuan, et al. Asymmetric Valleys: Beyond Sharp and Flat Local Minima, 2019, NeurIPS.
[5] Matthias Hein, et al. Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation, 2017, NIPS.
[6] Cho-Jui Hsieh, et al. Enhancing Certifiable Robustness via a Deep Model Ensemble, 2019, arXiv.
[7] Jorge Nocedal, et al. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima, 2016, ICLR.
[8] Deniz Erdogmus, et al. Structured Adversarial Attack: Towards General Implementation and Better Interpretability, 2018, ICLR.
[9] Tom Goldstein, et al. Improving the Tightness of Convex Relaxation Bounds for Training Certifiably Robust Classifiers, 2020, arXiv.
[10] Cho-Jui Hsieh, et al. Towards Stable and Efficient Training of Verifiably Robust Neural Networks, 2019, ICLR.
[11] Junfeng Yang, et al. Efficient Formal Safety Analysis of Neural Networks, 2018, NeurIPS.
[12] Po-Sen Huang, et al. Achieving Verified Robustness to Symbol Substitutions via Interval Bound Propagation, 2019, EMNLP-IJCNLP.
[13] Frank Hutter, et al. A Downsampled Variant of ImageNet as an Alternative to the CIFAR Datasets, 2017, arXiv.
[14] Pushmeet Kohli, et al. Efficient Neural Network Verification with Exactness Characterization, 2019, UAI.
[15] Timothy A. Mann, et al. On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models, 2018, arXiv.
[16] Somesh Jha, et al. Analyzing the Robustness of Nearest Neighbors to Adversarial Examples, 2017, ICML.
[17] Sijia Liu, et al. Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective, 2019, IJCAI.
[18] Kaiming He, et al. Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour, 2017, arXiv.
[19] Cho-Jui Hsieh, et al. RecurJac: An Efficient Recursive Algorithm for Bounding Jacobian Matrix of Neural Networks and Its Applications, 2018, AAAI.
[20] Christian Tjandraatmadja, et al. The Convex Relaxation Barrier, Revisited: Tightened Single-Neuron Relaxations for Neural Network Verification, 2020, NeurIPS.
[21] Timon Gehr, et al. An Abstract Domain for Certifying Neural Networks, 2019, Proc. ACM Program. Lang.
[22] Zhuowen Tu, et al. Aggregated Residual Transformations for Deep Neural Networks, 2016, CVPR.
[23] Mislav Balunovic, et al. Certifying Geometric Robustness of Neural Networks, 2019, NeurIPS.
[24] Matthew Mirman, et al. Differentiable Abstract Interpretation for Provably Robust Neural Networks, 2018, ICML.
[25] Natalia Gimelshein, et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library, 2019, NeurIPS.
[26] Pushmeet Kohli, et al. Training Verified Learners with Learned Verifiers, 2018, arXiv.
[27] Inderjit S. Dhillon, et al. Towards Fast Computation of Certified Robustness for ReLU Networks, 2018, ICML.
[28] Cho-Jui Hsieh, et al. Efficient Neural Network Robustness Certification with General Activation Functions, 2018, NeurIPS.
[29] Stephan Günnemann, et al. Certifiable Robustness and Robust Training for Graph Convolutional Networks, 2019, KDD.
[30] Martín Abadi, et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems, 2016, arXiv.
[31] Jinfeng Yi, et al. ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models, 2017, AISec@CCS.
[32] Minlie Huang, et al. Robustness Verification for Transformers, 2020, ICLR.
[33] Mani B. Srivastava, et al. Generating Natural Language Adversarial Examples, 2018, EMNLP.
[34] Yoshua Bengio, et al. Finding Flatter Minima with SGD, 2018, ICLR.
[35] Nikos Komodakis, et al. Wide Residual Networks, 2016, BMVC.
[36] Russ Tedrake, et al. Evaluating Robustness of Neural Networks with Mixed Integer Programming, 2017, ICLR.
[37] Yanjun Qi, et al. Black-Box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers, 2018, IEEE SPW.
[38] Matthew Mirman, et al. Fast and Effective Robustness Certification, 2018, NeurIPS.
[39] Song Han, et al. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding, 2015, ICLR.
[40] Lukasz Kaiser, et al. Attention Is All You Need, 2017, NIPS.
[41] Elad Hoffer, et al. Train Longer, Generalize Better: Closing the Generalization Gap in Large Batch Training of Neural Networks, 2017, NIPS.
[42] J. Zico Kolter, et al. Provable Defenses Against Adversarial Examples via the Convex Outer Adversarial Polytope, 2017, ICML.
[43] Greg Yang, et al. Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers, 2019, NeurIPS.
[44] Rüdiger Ehlers, et al. Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks, 2017, ATVA.
[45] Pushmeet Kohli, et al. A Dual Approach to Scalable Verification of Deep Networks, 2018, UAI.
[46] Aditi Raghunathan, et al. Semidefinite Relaxations for Certifying Robustness to Adversarial Examples, 2018, NeurIPS.
[47] Martin Vechev, et al. Beyond the Single Neuron Convex Barrier for Neural Network Certification, 2019, NeurIPS.
[48] Aditi Raghunathan, et al. Certified Robustness to Adversarial Word Substitutions, 2019, EMNLP.
[49] Mislav Balunovic, et al. Adversarial Training and Provable Defenses: Bridging the Gap, 2020, ICLR.
[50] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[51] Ngai Wong, et al. POPQORN: Quantifying Robustness of Recurrent Neural Networks, 2019, ICML.
[52] J. Zico Kolter, et al. Neural Network Virtual Sensors for Fuel Injection Quantities with Provable Performance Specifications, 2020, IEEE IV.
[53] Dahua Lin, et al. Fastened CROWN: Tightened Neural Network Robustness Certificates, 2019, AAAI.
[54] J. Zico Kolter, et al. Scaling Provable Adversarial Defenses, 2018, NeurIPS.
[55] Cho-Jui Hsieh, et al. A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks, 2019, NeurIPS.
[56] Geoffrey E. Hinton, et al. Learning Representations by Back-Propagating Errors, 1986, Nature.
[57] Jean-Baptiste Jeannin, et al. Verifying Aircraft Collision Avoidance Neural Networks Through Linear Approximations of Safe Regions, 2019, arXiv.
[58] Kilian Q. Weinberger, et al. Densely Connected Convolutional Networks, 2016, CVPR.