Peter Kairouz | Amir Houmansadr | Daniel Ramage | Virat Shejwalkar
[1] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[2] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[3] Claudia Eckert, et al. Adversarial Label Flips Attack on Support Vector Machines, 2012, ECAI.
[4] Marc'Aurelio Ranzato, et al. Large Scale Distributed Deep Networks, 2012, NIPS.
[5] T. Minka. Estimating a Dirichlet distribution, 2012.
[6] Blaine Nelson, et al. Poisoning Attacks against Support Vector Machines, 2012, ICML.
[7] Cristina Nita-Rotaru, et al. On the Practicality of Integrity Attacks on Document-Level Sentiment Analysis, 2014, AISec '14.
[8] Claudia Eckert, et al. Support vector machines under adversarial label contamination, 2015, Neurocomputing.
[9] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[10] Santosh S. Vempala, et al. Agnostic Estimation of Mean and Covariance, 2016, 2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS).
[11] Peter Richtárik, et al. Federated Learning: Strategies for Improving Communication Efficiency, 2016, ArXiv.
[12] Daniel M. Kane, et al. Robust Estimators in High Dimensions without the Computational Intractability, 2016, 2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS).
[13] Fabio Roli, et al. Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization, 2017, AISec@CCS.
[14] Yiran Chen, et al. Generative Poisoning Attack Method Against Neural Networks, 2017, ArXiv.
[15] Jerry Li, et al. Being Robust (in High Dimensions) Can Be Practical, 2017, ICML.
[16] Rachid Guerraoui, et al. Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent, 2017, NIPS.
[17] Gregory Cohen, et al. EMNIST: Extending MNIST to handwritten letters, 2017, 2017 International Joint Conference on Neural Networks (IJCNN).
[18] Dawn Xiaodong Song, et al. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning, 2017, ArXiv.
[19] Blaise Agüera y Arcas, et al. Communication-Efficient Learning of Deep Networks from Decentralized Data, 2016, AISTATS.
[20] Dan Alistarh, et al. Byzantine Stochastic Gradient Descent, 2018, NeurIPS.
[21] Dimitris S. Papailiopoulos, et al. DRACO: Byzantine-resilient Distributed Training via Redundant Gradients, 2018, ICML.
[22] Kannan Ramchandran, et al. Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates, 2018, ICML.
[23] Rachid Guerraoui, et al. The Hidden Vulnerability of Distributed Learning in Byzantium, 2018, ICML.
[24] Indranil Gupta, et al. Phocas: dimensional Byzantine-resilient stochastic gradient descent, 2018, ArXiv.
[25] Sebastian Caldas, et al. LEAF: A Benchmark for Federated Settings, 2018, ArXiv.
[26] Indranil Gupta, et al. Generalized Byzantine-tolerant SGD, 2018, ArXiv.
[27] H. Brendan McMahan, et al. Learning Differentially Private Recurrent Language Models, 2017, ICLR.
[28] Chang Liu, et al. Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning, 2018, 2018 IEEE Symposium on Security and Privacy (SP).
[29] Indranil Gupta, et al. Zeno: Distributed Stochastic Gradient Descent with Suspicion-based Fault-tolerance, 2018, ICML.
[30] Waheed Uz Zaman Bajwa, et al. ByRDiE: Byzantine-Resilient Distributed Coordinate Descent for Decentralized Learning, 2017, IEEE Transactions on Signal and Information Processing over Networks.
[31] Luis Muñoz-González, et al. Poisoning Attacks with Generative Adversarial Nets, 2019, ArXiv.
[32] Ben Y. Zhao, et al. Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks, 2019, 2019 IEEE Symposium on Security and Privacy (SP).
[33] Rachid Guerraoui, et al. SGD: Decentralized Byzantine Resilience, 2019, ArXiv.
[34] Amir Houmansadr, et al. Cronus: Robust and Heterogeneous Collaborative Learning with Black-Box Knowledge Transfer, 2019, ArXiv.
[35] Prateek Mittal, et al. Analyzing Federated Learning through an Adversarial Lens, 2018, ICML.
[36] Hubert Eichner, et al. Towards Federated Learning at Scale: System Design, 2019, SysML.
[37] Bo Li, et al. Attack-Resistant Federated Learning with Residual-based Reweighting, 2019, ArXiv.
[38] Nitin H. Vaidya, et al. Randomized Reactive Redundancy for Byzantine Fault-Tolerance in Parallelized Learning, 2019, ArXiv.
[39] Qing Ling, et al. RSA: Byzantine-Robust Stochastic Aggregation Methods for Distributed Learning from Heterogeneous Datasets, 2018, AAAI.
[40] Moran Baruch, et al. A Little Is Enough: Circumventing Defenses For Distributed Learning, 2019, NeurIPS.
[41] Hongyi Wang, et al. DETOX: A Redundancy-based Framework for Faster and More Robust Gradient Aggregation, 2019, NeurIPS.
[42] Kamyar Azizzadenesheli, et al. signSGD with Majority Vote is Communication Efficient and Fault Tolerant, 2018, ICLR.
[43] Xiangyu Zhang, et al. ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation, 2019, CCS.
[44] Jerry Li, et al. Sever: A Robust Meta-Algorithm for Stochastic Optimization, 2018, ICML.
[45] Rachid Guerraoui, et al. AGGREGATHOR: Byzantine Machine Learning via Robust Gradient Aggregation, 2019, SysML.
[46] Ananda Theertha Suresh, et al. Can You Really Backdoor Federated Learning?, 2019, ArXiv.
[47] Sebastian U. Stich, et al. Ensemble Distillation for Robust Model Fusion in Federated Learning, 2020, NeurIPS.
[48] Jinyuan Jia, et al. Local Model Poisoning Attacks to Byzantine-Robust Federated Learning, 2019, USENIX Security Symposium.
[49] Kartik Sreenivasan, et al. Attack of the Tails: Yes, You Really Can Backdoor Federated Learning, 2020, NeurIPS.
[50] Vitaly Shmatikov, et al. How To Backdoor Federated Learning, 2018, AISTATS.
[51] Vitaly Shmatikov, et al. Salvaging Federated Learning by Local Adaptation, 2020, ArXiv.
[52] Jaekyun Moon, et al. Election Coding for Distributed Learning: Protecting SignSGD against Byzantine Attacks, 2019, NeurIPS.
[53] Bo Li, et al. DBA: Distributed Backdoor Attacks against Federated Learning, 2020, ICLR.
[54] A. Ramamoorthy, et al. ByzShield: An Efficient and Robust System for Distributed Training, 2020, MLSys.
[55] Ivan Beschastnikh, et al. The Limitations of Federated Learning in Sybil Settings, 2020, RAID.
[56] Mehmet Emre Gursoy, et al. Data Poisoning Attacks Against Federated Learning Systems, 2020, ESORICS.
[57] Heiko Ludwig, et al. IBM Federated Learning: an Enterprise Framework White Paper V0.1, 2020, ArXiv.
[58] P. Mitra, et al. Mitigating Backdoor Attacks in Federated Learning, 2020, ArXiv.
[59] Richard Nock, et al. Advances and Open Problems in Federated Learning, 2019, Found. Trends Mach. Learn.
[60] Virginia Smith, et al. Ditto: Fair and Robust Federated Learning Through Personalization, 2020, ICML.
[61] Manzil Zaheer, et al. Adaptive Federated Optimization, 2020, ICLR.
[62] Rogier C. van Dalen, et al. Federated Evaluation and Tuning for On-Device Personalization: System Design & Applications, 2021, ArXiv.
[63] Suhas Diggavi, et al. Data Encoding for Byzantine-Resilient Distributed Optimization, 2021, IEEE Transactions on Information Theory.
[64] Amir Houmansadr, et al. Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning, 2021, NDSS.
[65] Xiaoyu Cao, et al. Provably Secure Federated Learning against Malicious Clients, 2021, AAAI.
[66] Minghao Chen, et al. CRFL: Certifiably Robust Federated Learning against Backdoor Attacks, 2021, ICML.
[67] Suhas Diggavi, et al. Byzantine-Resilient SGD in High Dimensions on Heterogeneous Data, 2020, 2021 IEEE International Symposium on Information Theory (ISIT).
[68] Zaïd Harchaoui, et al. Robust Aggregation for Federated Learning, 2019, IEEE Transactions on Signal Processing.