Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning
Syed Zawad | Ahsan Ali | Pin-Yu Chen | Ali Anwar | Yi Zhou | Nathalie Baracaldo | Yuan Tian | Feng Yan
[1] Brandon Tran et al. Spectral Signatures in Backdoor Attacks. NeurIPS, 2018.
[2] Xinyun Chen et al. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning. arXiv preprint, 2017.
[3] Jiale Zhang et al. Poisoning Attack in Federated Learning using Generative Adversarial Nets. IEEE TrustCom/BigDataSE, 2019.
[4] Kang Liu et al. Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks. RAID, 2018.
[5] Tian Li et al. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine, 2020.
[6] Sebastian Caldas et al. LEAF: A Benchmark for Federated Settings. arXiv preprint, 2018.
[7] Keith Bonawitz et al. Practical Secure Aggregation for Privacy-Preserving Machine Learning. ACM CCS, 2017.
[8] Clement Fung et al. Mitigating Sybils in Federated Learning Poisoning. arXiv preprint, 2018.
[9] Arjun Nitin Bhagoji et al. Analyzing Federated Learning through an Adversarial Lens. ICML, 2019.
[10] Yuzhe Ma et al. Data Poisoning against Differentially-Private Learners: Attacks and Defenses. IJCAI, 2019.
[11] Felix Sattler et al. Robust and Communication-Efficient Federated Learning From Non-i.i.d. Data. IEEE Transactions on Neural Networks and Learning Systems, 2020.
[12] Eugene Bagdasaryan et al. How To Backdoor Federated Learning. AISTATS, 2020.
[13] Ziteng Sun et al. Can You Really Backdoor Federated Learning? arXiv preprint, 2019.
[14] Bryant Chen et al. Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering. SafeAI@AAAI, 2019.
[15] Håkon Hukkelås et al. DeepPrivacy: A Generative Adversarial Network for Face Anonymization. ISVC, 2019.
[16] Chulin Xie et al. DBA: Distributed Backdoor Attacks against Federated Learning. ICLR, 2020.
[17] H. Brendan McMahan et al. Communication-Efficient Learning of Deep Networks from Decentralized Data. AISTATS, 2017.
[18] Jacob Steinhardt et al. Certified Defenses for Data Poisoning Attacks. NIPS, 2017.
[19] Xiang Li et al. On the Convergence of FedAvg on Non-IID Data. ICLR, 2020.
[20] Keith Bonawitz et al. Towards Federated Learning at Scale: System Design. SysML, 2019.
[21] Zheng Chai et al. TiFL: A Tier-based Federated Learning System. ACM HPDC, 2020.
[22] Shiqi Shen et al. Auror: Defending Against Poisoning Attacks in Collaborative Deep Learning Systems. ACSAC, 2016.
[23] Yue Zhao et al. Federated Learning with Non-IID Data. arXiv preprint, 2018.
[24] Bolun Wang et al. Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks. IEEE Symposium on Security and Privacy (S&P), 2019.
[25] Sanghyun Hong et al. On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping. arXiv preprint, 2020.
[26] Martín Abadi et al. Deep Learning with Differential Privacy. ACM CCS, 2016.