暂无分享,去创建一个
[1] Daniel Rueckert,et al. A generic framework for privacy preserving deep learning , 2018, ArXiv.
[2] Patrick D. McDaniel,et al. Machine Learning in Adversarial Settings , 2016, IEEE Security & Privacy.
[3] Nathan Srebro,et al. Minibatch vs Local SGD for Heterogeneous Distributed Learning , 2020, NeurIPS.
[4] Geoffrey E. Hinton,et al. Adaptive Mixtures of Local Experts , 1991, Neural Computation.
[5] Frederick R. Forst,et al. On robust estimation of the location parameter , 1980 .
[6] K. Johansson,et al. A Primal-Dual SGD Algorithm for Distributed Nonconvex Optimization , 2020, IEEE/CAA Journal of Automatica Sinica.
[7] Bo Li,et al. DBA: Distributed Backdoor Attacks against Federated Learning , 2020, ICLR.
[8] Blaise Agüera y Arcas,et al. Communication-Efficient Learning of Deep Networks from Decentralized Data , 2016, AISTATS.
[9] Christian Pellegrini,et al. Local experts combination through density decomposition , 1999, AISTATS.
[10] Tian Li,et al. Fair Resource Allocation in Federated Learning , 2019, ICLR.
[11] Tianjian Chen,et al. Federated Machine Learning: Concept and Applications , 2019 .
[12] Ananda Theertha Suresh,et al. Can You Really Backdoor Federated Learning? , 2019, ArXiv.
[13] Vitaly Shmatikov,et al. How To Backdoor Federated Learning , 2018, AISTATS.
[14] Samy Bengio,et al. A Parallel Mixture of SVMs for Very Large Scale Problems , 2001, Neural Computation.
[15] Gregory Cohen,et al. EMNIST: an extension of MNIST to handwritten letters , 2017, CVPR 2017.
[16] Dan Boneh,et al. Ensemble Adversarial Training: Attacks and Defenses , 2017, ICLR.
[17] Blaise Agüera y Arcas,et al. Federated Learning of Deep Networks using Model Averaging , 2016, ArXiv.
[18] Rachid Guerraoui,et al. Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent , 2017, NIPS.
[19] Martin Jaggi,et al. A Unified Theory of Decentralized SGD with Changing Topology and Local Updates , 2020, ICML.
[20] Zaïd Harchaoui,et al. Robust Aggregation for Federated Learning , 2019, IEEE Transactions on Signal Processing.