Byzantine Machine Learning: A Primer

The problem of Byzantine resilience in distributed machine learning, also known as Byzantine machine learning, consists of designing distributed algorithms that can train an accurate model despite the presence of Byzantine nodes, i.e., nodes with corrupt data or machines that may misbehave arbitrarily. Many solutions to this important problem have been proposed by now, most of which build upon the classical stochastic gradient descent (SGD) scheme. Yet, the literature lacks a unified structure for this emerging field, and as a consequence the general understanding of the principles of Byzantine machine learning remains poor. This paper addresses this issue by presenting a primer on Byzantine machine learning. In particular, we introduce three pillars of Byzantine machine learning, namely the concepts of breakdown point, robustness, and gradient complexity, to characterize the efficacy of a solution. The introduced systematization enables us to (i) bring forth the merits and limitations of state-of-the-art solutions, and (ii) pave a clear path for future advancements in the field.
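To make the SGD-based setting concrete, the following minimal Python sketch (an illustration, not the paper's algorithm) shows one server-side step of Byzantine-robust distributed SGD: the server replaces plain gradient averaging with a robust aggregation rule. Coordinate-wise median is used here as one representative rule, and the function names (`coordinate_wise_median`, `robust_sgd_step`) and the toy least-squares objective are hypothetical choices for this example.

```python
# Minimal sketch of one Byzantine-robust SGD step, assuming honest workers
# send noisy stochastic gradients while Byzantine workers send arbitrary
# vectors. Coordinate-wise median stands in for the robust aggregation rule.
import numpy as np

def coordinate_wise_median(gradients: np.ndarray) -> np.ndarray:
    """Robust aggregation: the median of each coordinate across workers."""
    return np.median(gradients, axis=0)

def robust_sgd_step(theta, honest_grads, byzantine_grads, lr=0.1):
    """One server-side update over all received (honest + Byzantine) gradients."""
    all_grads = np.vstack([honest_grads, byzantine_grads])
    return theta - lr * coordinate_wise_median(all_grads)

# Toy objective f(theta) = ||theta - target||^2 / 2, whose true gradient
# at theta is (theta - target).
rng = np.random.default_rng(0)
target = np.array([1.0, -2.0, 0.5])
theta = np.zeros(3)
n_honest, n_byz = 7, 3  # 3 Byzantine workers out of 10

for _ in range(200):
    # Honest workers: noisy copies of the true gradient.
    honest = (theta - target) + 0.1 * rng.standard_normal((n_honest, 3))
    # Byzantine workers: large vectors pointing away from the minimum.
    byz = np.tile(-100.0 * (theta - target), (n_byz, 1))
    theta = robust_sgd_step(theta, honest, byz)

print(theta)  # close to `target`; plain averaging would diverge here
```

In this toy run the median aggregation converges to the target despite 3 of 10 workers being adversarial, whereas averaging would be pulled arbitrarily far by the malicious gradients; the fraction of Byzantine workers such a rule tolerates is what the breakdown-point pillar quantifies.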
