[1] Donald R. Jones, et al. Efficient Global Optimization of Expensive Black-Box Functions, 1998, J. Glob. Optim.
[2] R. Dudley. The Sizes of Compact Subsets of Hilbert Space and Continuity of Gaussian Processes, 1967.
[3] Yisong Yue, et al. Safe Exploration and Optimization of Constrained MDPs Using Gaussian Processes, 2018, AAAI.
[4] David Wagner, et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, 2017, AISec@CCS.
[5] Yarin Gal, et al. Dropout Inference in Bayesian Neural Networks with Alpha-divergences, 2017, ICML.
[7] Matthias Hein, et al. Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation, 2017, NIPS.
[8] Michael Backes, et al. How Wrong Am I? - Studying Adversarial Examples and their Impact on Uncertainty in Gaussian Process Machine Learning Models, 2017, ArXiv.
[9] Patrick D. McDaniel, et al. On the (Statistical) Detection of Adversarial Examples, 2017, ArXiv.
[10] Sanjit A. Seshia, et al. Compositional Falsification of Cyber-Physical Systems with Machine Learning Components, 2017, NFM.
[12] Neil D. Lawrence, et al. Fast Forward Selection to Speed Up Sparse Gaussian Process Regression, 2003, AISTATS.
[13] Alkis Gotovos, et al. Safe Exploration for Optimization with Gaussian Processes, 2015, ICML.
[14] Francis R. Bach, et al. Exploring Large Feature Spaces with Hierarchical Multiple Kernel Learning, 2008, NIPS.
[15] Zoubin Ghahramani, et al. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, 2015, ICML.
[16] Aditi Raghunathan, et al. Certified Defenses against Adversarial Examples, 2018, ICLR.
[18] Luca Cardelli, et al. Reachability Computation for Switching Diffusions: Finite Abstractions with Certifiable and Tuneable Precision, 2017, HSCC.
[19] Matthew Wicker, et al. Feature-Guided Black-Box Safety Testing of Deep Neural Networks, 2017, TACAS.
[20] Ryan R. Curtin, et al. Detecting Adversarial Samples from Artifacts, 2017, ArXiv.
[21] Sanjit A. Seshia, et al. Towards Verified Artificial Intelligence, 2016, ArXiv.
[22] R. Adler, et al. Random Fields and Geometry, 2007.
[23] Xiaowei Huang, et al. Reachability Analysis of Deep Neural Networks with Provable Guarantees, 2018, IJCAI.
[24] Stefan Zohren, et al. Gradient descent in Gaussian random fields as a toy model for high-dimensional optimisation in deep learning, 2018, ArXiv.
[25] Carl E. Rasmussen, et al. Gaussian Processes for Machine Learning, 2005, Adaptive Computation and Machine Learning.
[28] Richard E. Turner, et al. Gaussian Process Behaviour in Wide Deep Neural Networks, 2018, ICLR.
[29] Fabio Roli, et al. Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning, 2018, CCS.
[30] David G. Lowe. Distinctive Image Features from Scale-Invariant Keypoints, 2004, Int. J. Comput. Vis.
[31] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[32] Ashish Kapoor, et al. Safe Control under Uncertainty, 2015, ArXiv.
[33] Luca Cardelli, et al. Central Limit Model Checking, 2018, ACM Trans. Comput. Log.
[34] Min Wu, et al. Safety Verification of Deep Neural Networks, 2016, CAV.
[35] Jaehoon Lee, et al. Deep Neural Networks as Gaussian Processes, 2017, ICLR.
[36] Ezio Bartocci, et al. System design of stochastic models using robustness of temporal properties, 2015, Theor. Comput. Sci.