Chris Hankin | Sergio Maffeis | Mathieu Sinn | Ambrish Rawat | Giulio Zizzo
[1] Kannan Ramchandran, et al. Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates, 2018, ICML.
[2] Rachid Guerraoui, et al. Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent, 2017, NIPS.
[3] Russ Tedrake, et al. Evaluating Robustness of Neural Networks with Mixed Integer Programming, 2019, ICLR.
[4] Indranil Gupta, et al. Generalized Byzantine-tolerant SGD, 2018, ArXiv.
[5] Parijat Dube, et al. Adversarial training in communication constrained federated learning, 2021, ArXiv.
[6] Moran Baruch, et al. A Little Is Enough: Circumventing Defenses For Distributed Learning, 2019, NeurIPS.
[7] Sebastian Caldas, et al. LEAF: A Benchmark for Federated Settings, 2018, ArXiv.
[8] Matthew Mirman, et al. Fast and Effective Robustness Certification, 2018, NeurIPS.
[9] Minghao Chen, et al. CRFL: Certifiably Robust Federated Learning against Backdoor Attacks, 2021, ICML.
[10] Indranil Gupta, et al. Zeno: Distributed Stochastic Gradient Descent with Suspicion-based Fault-tolerance, 2019, ICML.
[11] Jiayu Zhou, et al. Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning, 2021, ArXiv.
[12] Ce Zhang, et al. RAB: Provable Robustness Against Backdoor Attacks, 2020, ArXiv.
[13] Rachid Guerraoui, et al. The Hidden Vulnerability of Distributed Learning in Byzantium, 2018, ICML.
[14] Xiaoyu Cao, et al. Provably Secure Federated Learning against Malicious Clients, 2021, AAAI.
[15] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2018, ICLR.
[16] Richard Nock, et al. Advances and Open Problems in Federated Learning, 2021, Found. Trends Mach. Learn.
[17] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2017, IEEE Symposium on Security and Privacy (SP).
[18] Mislav Balunovic, et al. Adversarial Training and Provable Defenses: Bridging the Gap, 2020, ICLR.
[19] J. Zico Kolter, et al. Certified Adversarial Robustness via Randomized Smoothing, 2019, ICML.
[20] Timon Gehr, et al. An abstract domain for certifying neural networks, 2019, Proc. ACM Program. Lang.
[21] Blaise Agüera y Arcas, et al. Communication-Efficient Learning of Deep Networks from Decentralized Data, 2017, AISTATS.
[22] Swarat Chaudhuri, et al. AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation, 2018, IEEE Symposium on Security and Privacy (SP).
[23] Matthew Mirman, et al. Differentiable Abstract Interpretation for Provably Robust Neural Networks, 2018, ICML.
[24] Bo Li, et al. DBA: Distributed Backdoor Attacks against Federated Learning, 2020, ICLR.
[25] David A. Wagner, et al. Defensive Distillation is Not Robust to Adversarial Examples, 2016, ArXiv.