Threats to Federated Learning
Lingjuan Lyu, Qiang Yang, Han Yu, Jun Zhao
[1] Lingjuan Lyu, et al. Collaborative Fairness in Federated Learning, 2020, Federated Learning.
[2] Lingjuan Lyu, et al. How to Democratise and Protect AI: Fair and Differentially Private Decentralised Deep Learning, 2020, IEEE Transactions on Dependable and Secure Computing.
[3] Lingjuan Lyu, et al. FORESEEN: Towards Differentially Private Deep Inference for Intelligent Internet of Things, 2020, IEEE Journal on Selected Areas in Communications.
[4] Mi Zhang, et al. Privacy Risks of General-Purpose Language Models, 2020, 2020 IEEE Symposium on Security and Privacy (SP).
[5] Jun Zhao, et al. Local Differential Privacy-Based Federated Learning for Internet of Things, 2020, IEEE Internet of Things Journal.
[6] Tianjian Chen, et al. FedVision: An Online Visual Object Detection Platform Powered by Federated Learning, 2020, AAAI.
[7] Bo Zhao, et al. iDLG: Improved Deep Leakage from Gradients, 2020, ArXiv.
[8] Amir Houmansadr, et al. Cronus: Robust and Heterogeneous Collaborative Learning with Black-Box Knowledge Transfer, 2019, ArXiv.
[9] Yang Liu, et al. Federated Learning, 2019, Synthesis Lectures on Artificial Intelligence and Machine Learning.
[10] Richard Nock, et al. Advances and Open Problems in Federated Learning, 2019, Found. Trends Mach. Learn.
[11] Han Yu, et al. Privacy-preserving Heterogeneous Federated Transfer Learning, 2019, 2019 IEEE International Conference on Big Data (Big Data).
[12] Anit Kumar Sahu, et al. Federated Learning: Challenges, Methods, and Future Directions, 2019, IEEE Signal Processing Magazine.
[13] K. S. Ng, et al. Towards Fair and Privacy-Preserving Federated Deep Models, 2019, IEEE Transactions on Parallel and Distributed Systems.
[14] Song Han, et al. Deep Leakage from Gradients, 2019, NeurIPS.
[15] Lingjuan Lyu, et al. Fog-Embedded Deep Learning for the Internet of Things, 2019, IEEE Transactions on Industrial Informatics.
[16] Blaine Nelson, et al. Adversarial machine learning, 2011, AISec '11.
[17] Qiang Yang, et al. Federated Machine Learning, 2019, ACM Trans. Intell. Syst. Technol.
[18] Lili Su, et al. Distributed Statistical Machine Learning in Adversarial Settings: Byzantine Gradient Descent, 2019, PERV.
[19] Amir Houmansadr, et al. Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning, 2018, 2019 IEEE Symposium on Security and Privacy (SP).
[20] Gaurav Kapoor, et al. Protection Against Reconstruction and Its Applications in Private Federated Learning, 2018, ArXiv.
[21] Prateek Mittal, et al. Analyzing Federated Learning through an Adversarial Lens, 2018, ICML.
[22] Kamyar Azizzadenesheli, et al. signSGD with Majority Vote is Communication Efficient and Fault Tolerant, 2018, ICLR.
[23] Ivan Beschastnikh, et al. Mitigating Sybils in Federated Learning Poisoning, 2018, ArXiv.
[24] Vitaly Shmatikov, et al. How To Backdoor Federated Learning, 2018, AISTATS.
[25] Sanjiv Kumar, et al. cpSGD: Communication-efficient and differentially-private distributed SGD, 2018, NeurIPS.
[26] Vitaly Shmatikov, et al. Exploiting Unintended Feature Leakage in Collaborative Learning, 2018, 2019 IEEE Symposium on Security and Privacy (SP).
[27] Shiho Moriai, et al. Privacy-Preserving Deep Learning via Additively Homomorphic Encryption, 2018, IEEE Transactions on Information Forensics and Security.
[28] Lili Su, et al. Securing Distributed Machine Learning in High Dimensions, 2018, ArXiv.
[29] Tudor Dumitras, et al. Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks, 2018, NeurIPS.
[30] Dimitris S. Papailiopoulos, et al. DRACO: Byzantine-resilient Distributed Training via Redundant Gradients, 2018, ICML.
[31] Kannan Ramchandran, et al. Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates, 2018, ICML.
[32] Mianxiong Dong, et al. Learning IoT in Edge: Deep Learning for the Internet of Things with Edge Computing, 2018, IEEE Network.
[33] Rachid Guerraoui, et al. Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent, 2017, NIPS.
[34] Richard Nock, et al. Private federated learning on vertically partitioned data via entity resolution and additively homomorphic encryption, 2017, ArXiv.
[35] Sarvar Patel, et al. Practical Secure Aggregation for Privacy-Preserving Machine Learning, 2017, IACR Cryptol. ePrint Arch.
[36] H. Brendan McMahan, et al. Learning Differentially Private Recurrent Language Models, 2017, ICLR.
[37] Brendan Dolan-Gavitt, et al. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain, 2017, ArXiv.
[38] Giuseppe Ateniese, et al. Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning, 2017, CCS.
[39] Vitaly Shmatikov, et al. Membership Inference Attacks Against Machine Learning Models, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[40] Ian Goodfellow, et al. Deep Learning with Differential Privacy, 2016, CCS.
[41] Blaise Agüera y Arcas, et al. Communication-Efficient Learning of Deep Networks from Decentralized Data, 2016, AISTATS.
[42] Somesh Jha, et al. Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures, 2015, CCS.
[43] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[44] Blaine Nelson, et al. Poisoning Attacks against Support Vector Machines, 2012, ICML.
[45] Blaine Nelson, et al. Support Vector Machines Under Adversarial Label Noise, 2011, ACML.
[46] J. D. Tygar. Adversarial Machine Learning, 2011, IEEE Internet Comput.
[47] Blaine Nelson, et al. Can machine learning be secure?, 2006, ASIACCS '06.
[48] Chris Clifton, et al. Privacy-preserving distributed mining of association rules on horizontally partitioned data, 2004, IEEE Transactions on Knowledge and Data Engineering.
[49] Jaideep Vaidya, et al. Privacy preserving association rule mining in vertically partitioned data, 2002, KDD.