Guang-He Lee | David Alvarez-Melis | Tommi S. Jaakkola
[1] Luca Antiga, et al. Automatic differentiation in PyTorch, 2017.
[2] Li Fei-Fei, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[3] Andrew Zisserman, et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, 2013, ICLR.
[4] Matthias Hein, et al. Provable Robustness of ReLU networks via Maximization of Linear Regions, 2018, AISTATS.
[5] Christian Tjandraatmadja, et al. Bounding and Counting Linear Regions of Deep Neural Networks, 2017, ICML.
[6] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[7] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[8] Motoaki Kawanabe, et al. How to Explain Individual Classification Decisions, 2009, J. Mach. Learn. Res.
[9] Aaron C. Courville, et al. Improved Training of Wasserstein GANs, 2017, NIPS.
[10] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[11] Tommi S. Jaakkola, et al. On the Robustness of Interpretability Methods, 2018, arXiv.
[12] Darrell Whitley, et al. A genetic algorithm tutorial, 1994, Statistics and Computing.
[13] Mykel J. Kochenderfer, et al. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks, 2017, CAV.
[14] Qiang Ye, et al. Orthogonal Recurrent Neural Networks with Scaled Cayley Transform, 2017, ICML.
[15] Percy Liang, et al. Understanding Black-box Predictions via Influence Functions, 2017, ICML.
[16] Martin Wattenberg, et al. SmoothGrad: removing noise by adding noise, 2017, arXiv.
[17] Andrew L. Maas. Rectifier Nonlinearities Improve Neural Network Acoustic Models, 2013.
[18] Changshui Zhang, et al. Deep Defense: Training DNNs with Improved Adversarial Robustness, 2018, NeurIPS.
[19] J. Zico Kolter, et al. Provable defenses against adversarial examples via the convex outer adversarial polytope, 2017, ICML.
[20] Yoshua Bengio, et al. Unitary Evolution Recurrent Neural Networks, 2015, ICML.
[21] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, CVPR.
[22] Sanjiv Kumar, et al. On the Convergence of Adam and Beyond, 2018.
[23] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[24] J. Zico Kolter, et al. Scaling provable adversarial defenses, 2018, NeurIPS.
[25] Alessio Lomuscio, et al. An approach to reachability analysis for feed-forward ReLU neural networks, 2017, arXiv.
[26] Hossein Mobahi, et al. Large Margin Deep Networks for Classification, 2018, NeurIPS.
[27] Tommi S. Jaakkola, et al. Towards Robust Interpretability with Self-Explaining Neural Networks, 2018, NeurIPS.
[28] John C. Duchi, et al. Certifying Some Distributional Robustness with Principled Adversarial Training, 2017, ICLR.
[29] Marc G. Bellemare, et al. The Cramer Distance as a Solution to Biased Wasserstein Gradients, 2017, arXiv.
[30] Samy Bengio, et al. Understanding deep learning requires rethinking generalization, 2016, ICLR.
[31] Thomas Brox, et al. Striving for Simplicity: The All Convolutional Net, 2014, ICLR.
[32] Matthew Mirman, et al. Differentiable Abstract Interpretation for Provably Robust Neural Networks, 2018, ICML.
[33] Inderjit S. Dhillon, et al. Towards Fast Computation of Certified Robustness for ReLU Networks, 2018, ICML.
[34] V. Vapnik. Estimation of Dependences Based on Empirical Data, 2006.
[35] Matteo Fischetti, et al. Deep Neural Networks as 0-1 Mixed Integer Linear Programs: A Feasibility Study, 2017, arXiv.
[36] Yoshua Bengio, et al. A Generative Process for sampling Contractive Auto-Encoders, 2012, ICML.
[37] Ayhan Demiriz, et al. Semi-Supervised Support Vector Machines, 1998, NIPS.
[38] Richard Baraniuk, et al. Mad Max: Affine Spline Insights Into Deep Learning, 2018, Proceedings of the IEEE.
[39] Abubakar Abid, et al. Interpretation of Neural Networks is Fragile, 2017, AAAI.
[40] Surya Ganguli, et al. On the Expressive Power of Deep Neural Networks, 2016, ICML.
[41] Kilian Q. Weinberger, et al. Densely Connected Convolutional Networks, 2016, CVPR.
[42] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, EuroS&P.
[43] Chih-Hong Cheng, et al. Maximum Resilience of Artificial Neural Networks, 2017, ATVA.
[44] Jian Sun, et al. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification, 2015, ICCV.
[45] Yoshua Bengio, et al. Deep Sparse Rectifier Neural Networks, 2011, AISTATS.
[46] Razvan Pascanu, et al. On the Number of Linear Regions of Deep Neural Networks, 2014, NIPS.
[47] Dilin Wang, et al. Learning to Draw Samples: With Application to Amortized MLE for Generative Adversarial Learning, 2016, arXiv.
[48] Ankur Taly, et al. Axiomatic Attribution for Deep Networks, 2017, ICML.
[49] G. Griffin, et al. Caltech-256 Object Category Dataset, 2007.