Finite-Sum Optimization Over Networks Resilient to Byzantine Attacks
Qing Ling | Georgios B. Giannakis | Tianyi Chen | Zhaoxian Wu
[1] Zeyuan Allen-Zhu, et al. Katyusha: the first direct acceleration of stochastic gradient methods, 2016, J. Mach. Learn. Res.
[2] Indranil Gupta, et al. Fall of Empires: Breaking Byzantine-tolerant SGD by Inner Product Manipulation, 2019, UAI.
[3] Shai Shalev-Shwartz, et al. Stochastic dual coordinate ascent methods for regularized loss, 2012, J. Mach. Learn. Res.
[4] Stanislav Minsker. Geometric median and robust estimation in Banach spaces, 2013, arXiv:1308.1334.
[5] Tie-Yan Liu, et al. Convergence of Distributed Stochastic Variance Reduced Methods Without Sampling Extra Data, 2020, IEEE Transactions on Signal Processing.
[6] Waheed U. Bajwa, et al. Adversary-resilient Inference and Machine Learning: From Distributed to Decentralized, 2019, ArXiv.
[7] Martin J. Wainwright, et al. Local Privacy and Minimax Bounds: Sharp Rates for Probability Estimation, 2013, NIPS.
[8] Kaiming He, et al. Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour, 2017, ArXiv.
[9] Leslie Lamport, et al. The Byzantine Generals Problem, 1982, TOPL.
[10] Waheed U. Bajwa, et al. BRIDGE: Byzantine-Resilient Decentralized Gradient Descent, 2019, IEEE Transactions on Signal and Information Processing over Networks.
[11] Jie Liu, et al. SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient, 2017, ICML.
[12] Pascal Bianchi, et al. Robust Distributed Consensus Using Total Variation, 2016, IEEE Transactions on Automatic Control.
[13] Xiangru Lian, et al. D2: Decentralized Training over Decentralized Data, 2018, ICML.
[14] Kannan Ramchandran, et al. Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates, 2018, ICML.
[15] Zhiwei Xiong, et al. Byzantine-resilient Distributed Large-scale Matrix Completion, 2019, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[16] Pramod K. Varshney, et al. Distributed Inference with Byzantine Data: State-of-the-Art Review on Data Falsification Attacks, 2013, IEEE Signal Processing Magazine.
[17] Nicolas Le Roux, et al. Distributed SAGA: Maintaining linear convergence rate with limited communication, 2017, ArXiv.
[18] Soummya Kar, et al. The Internet of Things: Secure Distributed Inference, 2018, IEEE Signal Processing Magazine.
[19] Francis Bach, et al. SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives, 2014, NIPS.
[20] Alexander J. Smola, et al. On Variance Reduction in Stochastic Gradient Descent and its Asynchronous Variants, 2015, NIPS.
[21] Tom Goldstein, et al. Efficient Distributed SGD with Variance Reduction, 2016, IEEE 16th International Conference on Data Mining (ICDM).
[22] Rachid Guerraoui, et al. Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent, 2017, NIPS.
[23] Lili Su, et al. Distributed Statistical Machine Learning in Adversarial Settings: Byzantine Gradient Descent, 2019, PERV.
[24] Mark W. Schmidt, et al. Minimizing finite sums with the stochastic average gradient, 2013, Mathematical Programming.
[25] Léon Bottou, et al. Large-Scale Machine Learning with Stochastic Gradient Descent, 2010, COMPSTAT.
[26] Lili Su, et al. Securing Distributed Machine Learning in High Dimensions, 2018, ArXiv.
[27] Yehuda Lindell, et al. Privacy Preserving Data Mining, 2002, Journal of Cryptology.
[28] Peter Richtárik, et al. Federated Optimization: Distributed Machine Learning for On-Device Intelligence, 2016, ArXiv.
[29] Qing Ling, et al. RSA: Byzantine-Robust Stochastic Aggregation Methods for Distributed Learning from Heterogeneous Datasets, 2018, AAAI.
[30] Tong Zhang, et al. Accelerating Stochastic Gradient Descent using Predictive Variance Reduction, 2013, NIPS.