Securing Distributed Gradient Descent in High Dimensional Statistical Learning
[1] B. Ripley, et al. Robust Statistics, 2018, Encyclopedia of Mathematical Geosciences.
[2] R. Adamczak, et al. Quantitative estimates of the convergence of the empirical covariance matrix in log-concave ensembles, 2009, arXiv:0903.2323.
[3] Martin J. Wainwright, et al. Communication-efficient algorithms for statistical optimization, 2012, 51st IEEE Conference on Decision and Control (CDC).
[4] Roman Vershynin, et al. Introduction to the non-asymptotic analysis of random matrices, 2010, Compressed Sensing.
[5] Shie Mannor, et al. Distributed Robust Learning, 2014, arXiv.
[6] Jakub Konecný, et al. Federated Optimization: Distributed Optimization Beyond the Datacenter, 2015, arXiv.
[7] Rachid Guerraoui, et al. Byzantine-Tolerant Machine Learning, 2017, arXiv.
[8] Gregory Valiant, et al. Learning from untrusted data, 2016, STOC.
[9] Lili Su, et al. Distributed Statistical Machine Learning in Adversarial Settings, 2017, Proc. ACM Meas. Anal. Comput. Syst.
[10] Dan Alistarh, et al. Byzantine Stochastic Gradient Descent, 2018, NeurIPS.
[11] Kannan Ramchandran, et al. Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates, 2018, ICML.
[12] Gregory Valiant, et al. Resilience: A Criterion for Learning in the Presence of Arbitrary Outliers, 2017, ITCS.
[13] Lili Su, et al. Distributed Statistical Machine Learning in Adversarial Settings: Byzantine Gradient Descent, 2019, ACM SIGMETRICS Performance Evaluation Review.