[1] Edward Chou, et al. SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems, 2020, 2020 IEEE Security and Privacy Workshops (SPW).
[2] Tom Goldstein, et al. Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors, 2020, ECCV.
[3] Dawn Xiaodong Song, et al. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning, 2017, ArXiv.
[4] Dan Boneh, et al. SentiNet: Detecting Physical Attacks Against Deep Learning Systems, 2018, ArXiv.
[5] Nikos Komodakis, et al. Wide Residual Networks, 2016, BMVC.
[6] Mauricio Barahona, et al. Titration of chaos with added noise, 2001, Proceedings of the National Academy of Sciences of the United States of America.
[7] Weinan E, et al. A Proposal on Machine Learning via Dynamical Systems, 2017, Communications in Mathematics and Statistics.
[8] W. Siebert. Circuits, Signals and Systems, 1985.
[9] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[10] Junmo Kim, et al. Deep Pyramidal Residual Networks, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[11] Omri Azencot, et al. Lipschitz Recurrent Neural Networks, 2020, ArXiv.
[12] Michael I. Jordan, et al. A Dynamical Systems Perspective on Nesterov Acceleration, 2019, ICML.
[13] Alejandro Francisco Queiruga, et al. Studying Shallow and Deep Convolutional Neural Networks as Learned Numerical Schemes on the 1D Heat Equation and Burgers' Equation, 2019, ArXiv:1909.08142.
[14] Jerry Li, et al. Spectral Signatures in Backdoor Attacks, 2018, NeurIPS.
[15] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[16] Damith Chinthana Ranasinghe, et al. STRIP: a defence against trojan attacks on deep neural networks, 2019, ACSAC.
[17] Vitaly Shmatikov, et al. How To Backdoor Federated Learning, 2018, AISTATS.
[18] Brendan Dolan-Gavitt, et al. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain, 2017, ArXiv.
[19] Bin Dong, et al. Beyond Finite Layer Neural Networks: Bridging Deep Architectures and Numerical Differential Equations, 2017, ICML.
[20] David A. Forsyth, et al. NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles, 2017, ArXiv.
[21] Yoshua Bengio, et al. How transferable are features in deep neural networks?, 2014, NIPS.
[22] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[23] Ben Y. Zhao, et al. Regula Sub-rosa: Latent Backdoor Attacks on Deep Neural Networks, 2019, ArXiv.
[24] Aleksander Madry, et al. Clean-Label Backdoor Attacks, 2018.
[25] Hamed Pirsiavash, et al. Hidden Trigger Backdoor Attacks, 2019, AAAI.
[26] Wen-Chuan Lee, et al. Trojaning Attack on Neural Networks, 2018, NDSS.
[27] Minhui Xue, et al. Invisible Backdoor Attacks Against Deep Neural Networks, 2019, ArXiv.
[28] Minhui Xue, et al. Invisible Backdoor Attacks on Deep Neural Networks via Steganography and Regularization, 2019.
[29] M. Rosenstein, et al. A practical method for calculating largest Lyapunov exponents from small data sets, 1993.
[30] Adam Tauman Kalai, et al. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings, 2016, NIPS.
[31] Tengyu Ma, et al. Gradient Descent Learns Linear Dynamical Systems, 2016, J. Mach. Learn. Res.
[32] Rafael de la Llave, et al. A Tutorial on KAM Theory, 2003.
[33] Aleksander Madry, et al. Adversarial Examples Are Not Bugs, They Are Features, 2019, NeurIPS.
[34] Benjamin Edwards, et al. Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering, 2018, SafeAI@AAAI.
[35] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[36] Nikita Borisov, et al. Detecting AI Trojans Using Meta Neural Analysis, 2019, 2021 IEEE Symposium on Security and Privacy (SP).
[37] Harris Drucker, et al. Comparison of learning algorithms for handwritten digit recognition, 1995.
[38] Tudor Dumitras, et al. Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks, 2018, NeurIPS.
[39] Ankur Srivastava, et al. Neural Trojans, 2017, 2017 IEEE International Conference on Computer Design (ICCD).
[40] Sencun Zhu, et al. Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation, 2018, CODASPY.
[41] Michael W. Mahoney, et al. Physics-informed Autoencoders for Lyapunov-stable Fluid Flow Prediction, 2019, ArXiv.
[42] Omri Azencot, et al. Forecasting Sequential Data using Consistent Koopman Autoencoders, 2020, ICML.
[43] Ben Y. Zhao, et al. Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks, 2019, 2019 IEEE Symposium on Security and Privacy (SP).
[44] Jishen Zhao, et al. DeepInspect: A Black-box Trojan Detection and Mitigation Framework for Deep Neural Networks, 2019, IJCAI.
[45] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[46] David Duvenaud, et al. Neural Ordinary Differential Equations, 2018, NeurIPS.
[47] Brendan Dolan-Gavitt, et al. Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks, 2018, RAID.
[48] Sharad Goel, et al. The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning, 2018, ArXiv.
[49] Blaine Nelson, et al. Poisoning Attacks against Support Vector Machines, 2012, ICML.