Hacking the AI - the Next Generation of Hijacked Systems
[1] Claudia Eckert, et al. Adversarial Label Flips Attack on Support Vector Machines, 2012, ECAI.
[2] Jeannette M. Wing, et al. An Attack Surface Metric, 2011, IEEE Transactions on Software Engineering.
[3] Ali Farhadi, et al. You Only Look Once: Unified, Real-Time Object Detection, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[4] Jun Pan, et al. Spot Evasion Attacks: Adversarial Examples for License Plate Recognition Systems with Convolution Neural Networks, 2020, Comput. Secur.
[5] Bo Luo, et al. I Know What You See: Power Side-Channel Attack on Convolutional Neural Network Accelerators, 2018, ACSAC.
[6] Brendan Dolan-Gavitt, et al. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain, 2017, ArXiv.
[7] Kouichi Sakurai, et al. Attacking convolutional neural network using differential evolution, 2018, IPSJ Transactions on Computer Vision and Applications.
[8] Atul Prakash, et al. Robust Physical-World Attacks on Deep Learning Visual Classification, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[9] Zhenyu Zhang, et al. False Data Injection Attack Based on Hyperplane Migration of Support Vector Machine in Transmission Network of the Smart Grid, 2018, Symmetry.
[10] Nurali Virani, et al. Design of intentional backdoors in sequential models, 2019, ArXiv.
[11] Mario Fritz, et al. GAN-Leaks: A Taxonomy of Membership Inference Attacks against GANs, 2019, ArXiv.
[12] Tom White, et al. Generative Adversarial Networks: An Overview, 2017, IEEE Signal Processing Magazine.
[13] Patrick P. K. Chan, et al. Causative attack to Incremental Support Vector Machine, 2014, 2014 International Conference on Machine Learning and Cybernetics.
[14] Blaine Nelson, et al. Poisoning Attacks against Support Vector Machines, 2012, ICML.
[15] Rama Chellappa, et al. Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models, 2018, ICLR.
[16] Yu Ji, et al. Programmable Neural Network Trojan for Pre-Trained Feature Extractor, 2019, ArXiv.
[17] Fan Zhang, et al. Stealing Machine Learning Models via Prediction APIs, 2016, USENIX Security Symposium.
[18] Nathan S. Netanyahu, et al. Stealing Knowledge from Protected Deep Neural Networks Using Composite Unlabeled Data, 2019, 2019 International Joint Conference on Neural Networks (IJCNN).
[19] Prateek Mittal, et al. Analyzing Federated Learning through an Adversarial Lens, 2018, ICML.
[20] Fabio Roli, et al. Security Evaluation of Support Vector Machines in Adversarial Environments, 2014, ArXiv.
[21] Fabio Roli, et al. Infinity-Norm Support Vector Machines Against Adversarial Label Contamination, 2017, ITASEC.
[22] Deliang Fan, et al. Bit-Flip Attack: Crushing Neural Network With Progressive Bit Search, 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[23] Emiliano De Cristofaro, et al. LOGAN: Membership Inference Attacks Against Generative Models, 2017, Proc. Priv. Enhancing Technol.