Defending Distributed Systems Against Adversarial Attacks: Consensus, Consensus-based Learning, and Statistical Learning
[1] Nancy A. Lynch, et al. Collaboratively Learning the Best Option on Graphs, Using Bounded Local Memory, 2019.
[2] Lili Su, et al. Finite-Time Guarantees for Byzantine-Resilient Distributed State Estimation With Noisy Measurements, 2018, IEEE Transactions on Automatic Control.
[3] Andrzej Pelc, et al. Dissemination of Information in Communication Networks - Broadcasting, Gossiping, Leader Election, and Fault-Tolerance, 2005, Texts in Theoretical Computer Science, An EATCS Series.
[4] Samy Bengio, et al. Understanding deep learning requires rethinking generalization, 2016, ICLR.
[5] Nitin H. Vaidya, et al. Byzantine vector consensus in complete graphs, 2013, PODC '13.
[6] Nitin H. Vaidya, et al. Multi-agent optimization in the presence of Byzantine adversaries: Fundamental limits, 2016, 2016 American Control Conference (ACC).
[7] Rachid Guerraoui, et al. Byzantine-Tolerant Machine Learning, 2017, ArXiv.
[8] Nitin H. Vaidya, et al. Fault-Tolerant Multi-Agent Optimization: Optimal Iterative Distributed Algorithms, 2016, PODC.
[9] Maurice Herlihy, et al. Multidimensional approximate agreement in Byzantine asynchronous systems, 2013, STOC '13.
[10] Kannan Ramchandran, et al. Defending Against Saddle Point Attack in Byzantine-Robust Distributed Learning, 2018, ICML.
[11] Shreyas Sundaram, et al. Distributed Function Calculation via Linear Iterative Strategies in the Presence of Malicious Agents, 2011, IEEE Transactions on Automatic Control.
[12] Nitin H. Vaidya, et al. Byzantine-Resilient Multiagent Optimization, 2021, IEEE Transactions on Automatic Control.
[13] Nancy A. Lynch, et al. Spike-Based Winner-Take-All Computation: Fundamental Limits and Order-Optimal Circuits, 2019, Neural Computation.
[14] Lili Su, et al. Securing Distributed Gradient Descent in High Dimensional Statistical Learning, 2018, Proc. ACM Meas. Anal. Comput. Syst.
[15] Alexander V. Nazin, et al. Algorithms of Robust Stochastic Optimization Based on Mirror Descent Method, 2019, Automation and Remote Control.
[16] Vitaly Shmatikov, et al. How To Backdoor Federated Learning, 2018, AISTATS.
[17] Lili Su, et al. On Learning Over-parameterized Neural Networks: A Functional Approximation Perspective, 2019, NeurIPS.
[18] Indranil Gupta, et al. Generalized Byzantine-tolerant SGD, 2018, ArXiv.
[19] Nancy A. Lynch, et al. Ant-Inspired Dynamic Task Allocation via Gossiping, 2017, SSS.
[20] Lili Su, et al. Distributed Statistical Machine Learning in Adversarial Settings, 2017, Proc. ACM Meas. Anal. Comput. Syst.
[21] Leslie Lamport, et al. The Byzantine Generals Problem, 1982, ACM Trans. Program. Lang. Syst.
[22] Dan Alistarh, et al. Byzantine Stochastic Gradient Descent, 2018, NeurIPS.
[23] Ali Jadbabaie, et al. Non-Bayesian Social Learning, 2011, Games Econ. Behav.
[24] Nitin H. Vaidya, et al. Non-Bayesian Learning in the Presence of Byzantine Agents, 2016, DISC.
[25] Nancy A. Lynch, et al. Collaboratively Learning the Best Option on Graphs, Using Bounded Local Memory, 2019, SIGMETRICS.
[26] Nancy A. Lynch, et al. Collaboratively Learning the Best Option on Graphs, Using Bounded Local Memory, 2018, Proc. ACM Meas. Anal. Comput. Syst.
[27] Qing Ling, et al. RSA: Byzantine-Robust Stochastic Aggregation Methods for Distributed Learning from Heterogeneous Datasets, 2018, AAAI.