Sébastien Bubeck | Eric Price | Ilya P. Razenshteyn
[1] Vijay V. Vazirani, et al. Trapdoor pseudo-random number generators, with applications to protocol design, 1983, 24th Annual Symposium on Foundations of Computer Science (SFCS 1983).
[2] Manuel Blum, et al. A Simple Unpredictable Pseudo-Random Number Generator, 1986, SIAM J. Comput.
[3] Michael Kearns,et al. Efficient noise-tolerant learning from statistical queries , 1993, STOC.
[4] Yishay Mansour,et al. Weakly learning DNF and characterizing statistical query learning using Fourier analysis , 1994, STOC '94.
[5] Yurii Nesterov,et al. Introductory Lectures on Convex Optimization - A Basic Course , 2014, Applied Optimization.
[6] Pedro M. Domingos,et al. Adversarial classification , 2004, KDD.
[7] Amir Globerson,et al. Nightmare at test time: robust learning by feature deletion , 2006, ICML.
[8] Alexander A. Sherstov,et al. Unconditional lower bounds for learning intersections of halfspaces , 2007, Machine Learning.
[9] Joan Bruna,et al. Intriguing properties of neural networks , 2013, ICLR.
[10] Le Song,et al. On the Complexity of Learning Neural Networks , 2017, NIPS.
[11] Santosh S. Vempala,et al. Statistical Algorithms and a Lower Bound for Detecting Planted Cliques , 2012, J. ACM.
[12] Amparo Gil, et al. Asymptotic Approximations to the Nodes and Weights of Gauss–Hermite and Gauss–Laguerre Quadratures, 2017, arXiv:1709.09656.
[13] Daniel M. Kane,et al. Statistical Query Lower Bounds for Robust Estimation of High-Dimensional Gaussians and Gaussian Mixtures , 2016, 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS).
[14] Vitaly Feldman,et al. A General Characterization of the Statistical Query Complexity , 2016, COLT.
[15] Vatsal Sharan,et al. A Spectral View of Adversarially Robust Features , 2018, NeurIPS.
[16] Somesh Jha,et al. Analyzing the Robustness of Nearest Neighbors to Adversarial Examples , 2017, ICML.
[17] J. Zico Kolter,et al. Provable defenses against adversarial examples via the convex outer adversarial polytope , 2017, ICML.
[18] David A. Wagner,et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples , 2018, ICML.
[19] Hamza Fawzi,et al. Adversarial vulnerability for any classifier , 2018, NeurIPS.
[20] Aleksander Madry,et al. Towards Deep Learning Models Resistant to Adversarial Attacks , 2017, ICLR.
[21] Aleksander Madry,et al. Adversarially Robust Generalization Requires More Data , 2018, NeurIPS.
[22] Pushmeet Kohli,et al. A Dual Approach to Scalable Verification of Deep Networks , 2018, UAI.
[23] Inderjit S. Dhillon,et al. Towards Fast Computation of Certified Robustness for ReLU Networks , 2018, ICML.
[24] Jascha Sohl-Dickstein,et al. Adversarial Examples that Fool both Computer Vision and Time-Limited Humans , 2018, NeurIPS.
[25] Jascha Sohl-Dickstein,et al. Adversarial Examples that Fool both Human and Computer Vision , 2018, ArXiv.
[26] Martin Wattenberg,et al. Adversarial Spheres , 2018, ICLR.
[27] Kannan Ramchandran,et al. Rademacher Complexity for Adversarially Robust Generalization , 2018, ICML.
[28] Vinod Vaikuntanathan, et al. Computational Limitations in Robust Classification and Win-Win Results, 2019, IACR Cryptol. ePrint Arch.
[29] Aleksander Madry,et al. Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability , 2018, ICLR.
[30] Inderjit S. Dhillon,et al. The Limitations of Adversarial Training and the Blind-Spot Attack , 2019, ICLR.