[1] Russ Tedrake,et al. Evaluating Robustness of Neural Networks with Mixed Integer Programming , 2017, ICLR.
[2] Parham Aarabi,et al. Adversarial Attacks on Face Detectors Using Neural Net Based Constrained Optimization , 2018, 2018 IEEE 20th International Workshop on Multimedia Signal Processing (MMSP).
[3] Yi Li,et al. R-FCN: Object Detection via Region-based Fully Convolutional Networks , 2016, NIPS.
[4] Tom Goldstein,et al. Breaking certified defenses: Semantic adversarial examples with spoofed robustness certificates , 2020, ICLR.
[5] Michael I. Jordan,et al. HopSkipJumpAttack: A Query-Efficient Decision-Based Attack , 2019, 2020 IEEE Symposium on Security and Privacy (SP).
[6] Sergey Ioffe,et al. Rethinking the Inception Architecture for Computer Vision , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[7] Shane Legg,et al. Human-level control through deep reinforcement learning , 2015, Nature.
[8] Ananthram Swami,et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks , 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[9] David Jacobs,et al. Adversarially robust transfer learning , 2020, ICLR.
[10] Jinoh Kim,et al. A survey of deep learning-based network anomaly detection , 2017, Cluster Computing.
[11] Nancy Wilkins-Diehr,et al. XSEDE: Accelerating Scientific Discovery , 2014, Computing in Science & Engineering.
[12] Ronald M. Summers,et al. Deep Learning in Medical Imaging: Overview and Future Promise of an Exciting New Technique , 2016.
[13] Paul Rad,et al. Chameleon: A Scalable Production Testbed for Computer Science Research , 2019, Contemporary High Performance Computing.
[14] Liang Tong,et al. Defending Against Physically Realizable Attacks on Image Classification , 2020, ICLR.
[15] Sergey Levine,et al. Adversarial Policies: Attacking Deep Reinforcement Learning , 2019, ICLR.
[16] Nikolaos Doulamis,et al. Deep Learning for Computer Vision: A Brief Review , 2018, Comput. Intell. Neurosci.
[17] Xin Zhang,et al. End to End Learning for Self-Driving Cars , 2016, ArXiv.
[18] Veton Kepuska,et al. Next-generation of virtual personal assistants (Microsoft Cortana, Apple Siri, Amazon Alexa and Google Home) , 2018, 2018 IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC).
[19] Luca Rigazio,et al. Towards Deep Neural Network Architectures Robust to Adversarial Examples , 2014, ICLR.
[20] J. Zico Kolter,et al. Provable defenses against adversarial examples via the convex outer adversarial polytope , 2017, ICML.
[21] Cho-Jui Hsieh,et al. Robust Decision Trees Against Adversarial Examples , 2019.
[22] Yee Whye Teh,et al. A Statistical Approach to Assessing Neural Network Robustness , 2018, ICLR.
[23] Harini Kannan,et al. Adversarial Logit Pairing , 2018, NIPS.
[24] J. Zico Kolter,et al. Fast is better than free: Revisiting adversarial training , 2020, ICLR.
[25] Logan Engstrom,et al. Query-Efficient Black-box Adversarial Examples , 2017, ArXiv.
[26] Yann LeCun,et al. A Closer Look at Spatiotemporal Convolutions for Action Recognition , 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[27] Iasonas Kokkinos,et al. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs , 2016, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[28] Chen-Kuo Chiang,et al. Generating Adversarial Examples By Makeup Attacks on Face Recognition , 2019, 2019 IEEE International Conference on Image Processing (ICIP).
[29] Paul Rad,et al. Distributed machine learning cloud teleophthalmology IoT for predicting AMD disease progression , 2019, Future Gener. Comput. Syst.
[30] Matthew Mirman,et al. Fast and Effective Robustness Certification , 2018, NeurIPS.
[31] Moustapha Cissé,et al. Parseval Networks: Improving Robustness to Adversarial Examples , 2017, ICML.
[32] Li Chen,et al. SHIELD: Fast, Practical Defense and Vaccination for Deep Learning using JPEG Compression , 2018, KDD.
[33] Francesco Croce,et al. Provable robustness against all adversarial $l_p$-perturbations for $p\geq 1$ , 2019, ICLR.
[34] Joan Bruna,et al. Intriguing properties of neural networks , 2013, ICLR.
[35] Debdeep Mukhopadhyay,et al. Adversarial Attacks and Defences: A Survey , 2018, ArXiv.
[36] Aleksander Madry,et al. Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability , 2018, ICLR.
[37] Suman Jana,et al. Certified Robustness to Adversarial Examples with Differential Privacy , 2018, 2019 IEEE Symposium on Security and Privacy (SP).
[38] Dina Katabi,et al. ME-Net: Towards Effective Adversarial Robustness with Matrix Estimation , 2019, ICML.
[39] Jason Yosinski,et al. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images , 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[40] Fabio Roli,et al. Security Evaluation of Support Vector Machines in Adversarial Environments , 2014, ArXiv.
[41] Paul Rad,et al. A Distributed Secure Machine-Learning Cloud Architecture for Semantic Analysis , 2018.
[42] Somesh Jha,et al. Objective Metrics and Gradient Descent Algorithms for Adversarial Examples in Machine Learning , 2017, ACSAC.
[43] Balaraman Ravindran,et al. EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness against Adversarial Attacks , 2020, ICLR.
[44] Richa Singh,et al. Unravelling Robustness of Deep Learning based Face Recognition Against Adversarial Attacks , 2018, AAAI.
[45] Tao Wei,et al. Fooling Detection Alone is Not Enough: Adversarial Attack against Multiple Object Tracking , 2020, ICLR.
[46] Guigang Zhang,et al. Deep Learning , 2016, Int. J. Semantic Comput.
[47] Anthony Yezzi,et al. An Adaptive View of Adversarial Robustness from Test-time Smoothing Defense , 2019, ArXiv.
[48] Aleksander Madry,et al. Exploring the Landscape of Spatial Robustness , 2017, ICML.
[49] Heng-Tze Cheng,et al. Wide & Deep Learning for Recommender Systems , 2016, DLRS@RecSys.
[50] Inderjit S. Dhillon,et al. Towards Fast Computation of Certified Robustness for ReLU Networks , 2018, ICML.
[51] Prateek Mittal,et al. DARTS: Deceiving Autonomous Cars with Toxic Signs , 2018, ArXiv.
[52] Hunter Gabbard,et al. Matching Matched Filtering with Deep Networks for Gravitational-Wave Astronomy , 2017, Physical Review Letters.
[53] Mehrad Jaloli,et al. Implicit Life Event Discovery From Call Transcripts Using Temporal Input Transformation Network , 2019, IEEE Access.
[54] Jinfeng Yi,et al. ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models , 2017, AISec@CCS.
[55] Jinfeng Yi,et al. Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach , 2018, ICLR.
[56] Geoffrey E. Hinton,et al. ImageNet classification with deep convolutional neural networks , 2012, Commun. ACM.
[57] Fabio Roli,et al. Evasion Attacks against Machine Learning at Test Time , 2013, ECML/PKDD.
[58] Guillermo Sapiro,et al. Robust Large Margin Deep Neural Networks , 2016, IEEE Transactions on Signal Processing.
[59] Roberto Cipolla,et al. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation , 2015, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[60] Cho-Jui Hsieh,et al. Efficient Neural Network Robustness Certification with General Activation Functions , 2018, NeurIPS.
[61] Nikos Komodakis,et al. Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer , 2016, ICLR.
[62] Mingyan Liu,et al. Spatially Transformed Adversarial Examples , 2018, ICLR.
[63] Tom Rainforth,et al. Statistically Robust Neural Network Classification , 2019, ArXiv.
[64] Ananthram Swami,et al. Practical Black-Box Attacks against Machine Learning , 2016, AsiaCCS.
[65] Matthias Bethge,et al. Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models , 2017, ICLR.
[66] Paul Rad,et al. Cooperative unmanned aerial vehicles with privacy preserving deep vision for real-time object identification and tracking , 2019, J. Parallel Distributed Comput.
[67] Jinfeng Yi,et al. EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples , 2017, AAAI.
[68] Cordelia Schmid,et al. Long-Term Temporal Convolutions for Action Recognition , 2016, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[69] Natalia Gimelshein,et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library , 2019, NeurIPS.
[70] Jiliang Tang,et al. Adversarial Attacks and Defenses in Images, Graphs and Text: A Review , 2019, International Journal of Automation and Computing.
[71] Chong Wang,et al. Deep Speech 2: End-to-End Speech Recognition in English and Mandarin , 2015, ICML.
[72] Rama Chellappa,et al. UPSET and ANGRI: Breaking High Performance Image Classifiers , 2017, ArXiv.
[73] Jia Deng,et al. Stacked Hourglass Networks for Human Pose Estimation , 2016, ECCV.
[74] Lap-Pui Chau,et al. Improved Network Robustness with Adversary Critic , 2018, NeurIPS.
[75] Changshui Zhang,et al. Deep Defense: Training DNNs with Improved Adversarial Robustness , 2018, NeurIPS.
[76] David L. Dill,et al. Ground-Truth Adversarial Examples , 2017, ArXiv.
[77] Silvio Savarese,et al. Learning to Track at 100 FPS with Deep Regression Networks , 2016, ECCV.
[78] Ajmal Mian,et al. Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey , 2018, IEEE Access.
[79] Ali Farhadi,et al. You Only Look Once: Unified, Real-Time Object Detection , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[80] Aleksander Madry,et al. Towards Deep Learning Models Resistant to Adversarial Attacks , 2017, ICLR.
[81] Jonathon Shlens,et al. Explaining and Harnessing Adversarial Examples , 2014, ICLR.
[82] Pradeep Ravikumar,et al. MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius , 2020, ICLR.
[83] Paul Rad,et al. Implementation of deep packet inspection in smart grids and industrial Internet of Things: Challenges and opportunities , 2019, J. Netw. Comput. Appl.
[84] J. Zico Kolter,et al. Certified Adversarial Robustness via Randomized Smoothing , 2019, ICML.
[85] Sijia Liu,et al. CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks , 2018, AAAI.
[86] Dan Boneh,et al. Ensemble Adversarial Training: Attacks and Defenses , 2017, ICLR.
[87] Kouichi Sakurai,et al. One Pixel Attack for Fooling Deep Neural Networks , 2017, IEEE Transactions on Evolutionary Computation.
[88] Mykel J. Kochenderfer,et al. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks , 2017, CAV.
[89] Jonathan Krause,et al. Using deep learning and Google Street View to estimate the demographic makeup of neighborhoods across the United States , 2017, Proceedings of the National Academy of Sciences.
[90] Miltos Petridis,et al. Seen the villains: Detecting Social Engineering Attacks using Case-based Reasoning and Deep Learning , 2019, ICCBR Workshops.
[91] Kevin Fu,et al. Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving , 2019, CCS.
[92] Seyed-Mohsen Moosavi-Dezfooli,et al. Universal Adversarial Perturbations , 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[93] Ananthram Swami,et al. The Limitations of Deep Learning in Adversarial Settings , 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[94] Nir Morgulis,et al. Fooling a Real Car with Adversarial Traffic Signs , 2019, ArXiv.
[95] Moustapha Cissé,et al. Countering Adversarial Images using Input Transformations , 2018, ICLR.
[96] Arno Blaas,et al. BayesOpt Adversarial Attack , 2020, ICLR.
[97] Matthias Hein,et al. Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation , 2017, NIPS.
[98] Seyed-Mohsen Moosavi-Dezfooli,et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[99] Michael I. Jordan,et al. Theoretically Principled Trade-off between Robustness and Accuracy , 2019, ICML.
[100] Sergey Ioffe,et al. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning , 2016, AAAI.
[101] Rengan Suresh,et al. A Multi-Layer K-means Approach for Multi-Sensor Data Pattern Recognition in Multi-Target Localization , 2017, ArXiv.
[102] Cho-Jui Hsieh,et al. Towards Robust Neural Networks via Random Self-ensemble , 2017, ECCV.
[103] J. Doug Tygar,et al. Adversarial machine learning , 2011, AISec '11.
[104] Paul Rad,et al. Driverless vehicle security: Challenges and future research opportunities , 2020, Future Gener. Comput. Syst.
[105] Aditi Raghunathan,et al. Certified Defenses against Adversarial Examples , 2018, ICLR.
[106] Pan He,et al. Adversarial Examples: Attacks and Defenses for Deep Learning , 2017, IEEE Transactions on Neural Networks and Learning Systems.
[107] Andrew Slavin Ross,et al. Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients , 2017, AAAI.
[108] Alina Oprea,et al. Adversarial Examples for Deep Learning Cyber Security Analytics , 2019, ArXiv.
[109] Edward Raff,et al. Barrage of Random Transforms for Adversarially Robust Defense , 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[110] Alan L. Yuille,et al. Feature Denoising for Improving Adversarial Robustness , 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[111] Yaser Sheikh,et al. OpenPose: Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields , 2018, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[112] Jian Sun,et al. Deep Residual Learning for Image Recognition , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[113] Gavin Brown,et al. Is Deep Learning Safe for Robot Vision? Adversarial Examples Against the iCub Humanoid , 2017, 2017 IEEE International Conference on Computer Vision Workshops (ICCVW).
[114] Baishakhi Ray,et al. Metric Learning for Adversarial Robustness , 2019, NeurIPS.
[115] Wei Liu,et al. Efficient Decision-Based Black-Box Adversarial Attacks on Face Recognition , 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[116] David A. Wagner,et al. Towards Evaluating the Robustness of Neural Networks , 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[117] John C. Duchi,et al. Certifying Some Distributional Robustness with Principled Adversarial Training , 2017, ICLR.
[118] Sergey Levine,et al. Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates , 2016, 2017 IEEE International Conference on Robotics and Automation (ICRA).
[119] Jonathon Shlens,et al. Conditional Image Synthesis with Auxiliary Classifier GANs , 2016, ICML.
[120] Yang Song,et al. Constructing Unrestricted Adversarial Examples with Generative Models , 2018, NeurIPS.
[121] Scott E. Coull,et al. Exploring Adversarial Examples in Malware Detection , 2018, 2019 IEEE Security and Privacy Workshops (SPW).
[122] Atul Prakash,et al. Robust Physical-World Attacks on Deep Learning Visual Classification , 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[123] Kun He,et al. Robust Local Features for Improving the Generalization of Adversarial Training , 2020, ICLR.
[124] Thomas Brox,et al. DeepTAM: Deep Tracking and Mapping , 2018, ECCV.
[125] Bram van Ginneken,et al. A survey on deep learning in medical image analysis , 2017, Medical Image Anal.
[126] John J. Prevost,et al. Human Action Performance Using Deep Neuro-Fuzzy Recurrent Attention Model , 2020, IEEE Access.
[127] Chao Ren,et al. BiRen: predicting enhancers with a deep-learning-based model using the DNA sequence alone , 2017, Bioinform.
[128] Paul Rad,et al. Deep Learning Poison Data Attack Detection , 2019, 2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI).
[129] Arun Das,et al. Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey , 2020, ArXiv.
[130] Samuel Henrique Silva,et al. Temporal Graph Traversals Using Reinforcement Learning With Proximal Policy Optimization , 2020, IEEE Access.
[131] Swarat Chaudhuri,et al. AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation , 2018, 2018 IEEE Symposium on Security and Privacy (SP).
[132] Tianlong Chen,et al. Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference , 2020, ICLR.
[133] Samy Bengio,et al. Adversarial Machine Learning at Scale , 2016, ICLR.
[134] James Bailey,et al. Improving Adversarial Robustness Requires Revisiting Misclassified Examples , 2020, ICLR.
[135] Ian T. Foster,et al. Jetstream: a self-provisioned, scalable science and engineering cloud environment , 2015, XSEDE.
[136] Cho-Jui Hsieh,et al. Towards Stable and Efficient Training of Verifiably Robust Neural Networks , 2019, ICLR.
[137] Yuan Yu,et al. TensorFlow: A system for large-scale machine learning , 2016, OSDI.