PDGAN: A Novel Poisoning Defense Method in Federated Learning Using Generative Adversarial Network
Di Wu | Shui Yu | Ying Zhao | Jiale Zhang | Junjun Chen | Jian Teng