[1] Yuan Yu, et al. TensorFlow: A system for large-scale machine learning, 2016, OSDI.
[2] Vitaly Shmatikov, et al. How To Backdoor Federated Learning, 2018, AISTATS.
[3] Blaise Agüera y Arcas, et al. Communication-Efficient Learning of Deep Networks from Decentralized Data, 2016, AISTATS.
[4] Chang Liu, et al. Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning, 2018, 2018 IEEE Symposium on Security and Privacy (SP).
[5] Peter Richtárik, et al. Federated Learning: Strategies for Improving Communication Efficiency, 2016, arXiv.
[6] Richard Nock, et al. Advances and Open Problems in Federated Learning, 2021, Found. Trends Mach. Learn.
[7] Bilal Farooq, et al. Ensemble Convolutional Neural Networks for Mode Inference in Smartphone Travel Survey, 2019, IEEE Transactions on Intelligent Transportation Systems.
[8] Prateek Mittal, et al. Analyzing Federated Learning through an Adversarial Lens, 2018, ICML.
[9] Lili Su, et al. Distributed Statistical Machine Learning in Adversarial Settings: Byzantine Gradient Descent, 2019, Proc. ACM Meas. Anal. Comput. Syst.
[10] Ivan Beschastnikh, et al. Mitigating Sybils in Federated Learning Poisoning, 2018, arXiv.
[11] Blaine Nelson, et al. The security of machine learning, 2010, Machine Learning.
[12] Siddharth Garg, et al. BadNets: Evaluating Backdooring Attacks on Deep Neural Networks, 2019, IEEE Access.
[13] Fabio Roli, et al. Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization, 2017, AISec@CCS.
[14] Rachid Guerraoui, et al. The Hidden Vulnerability of Distributed Learning in Byzantium, 2018, ICML.
[15] Shouling Ji, et al. Justinian's GAAvernor: Robust Distributed Learning with Gradient Aggregation Agent, 2020, USENIX Security Symposium.
[16] Jinyuan Jia, et al. Local Model Poisoning Attacks to Byzantine-Robust Federated Learning, 2019, USENIX Security Symposium.
[17] Rachid Guerraoui, et al. Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent, 2017, NIPS.