Robust and Communication-Efficient Federated Learning From Non-i.i.d. Data
Felix Sattler | Simon Wiedemann | Klaus-Robert Müller | Wojciech Samek
[1] Klaus-Robert Müller, et al. Compact and Computationally Efficient Representation of Deep Neural Networks, 2018, IEEE Transactions on Neural Networks and Learning Systems.
[2] Fei-Fei Li, et al. Deep Visual-Semantic Alignments for Generating Image Descriptions, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[3] Martin Jaggi, et al. Sparsified SGD with Memory, 2018, NeurIPS.
[4] Roland Vollgraf, et al. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms, 2017, ArXiv.
[5] Takuya Akiba, et al. Variance-based Gradient Compression for Efficient Distributed Deep Learning, 2018, ICLR.
[6] Hubert Eichner, et al. Towards Federated Learning at Scale: System Design, 2019, SysML.
[7] Fei-Fei Li, et al. Large-Scale Video Classification with Convolutional Neural Networks, 2014, 2014 IEEE Conference on Computer Vision and Pattern Recognition.
[8] Vitaly Shmatikov, et al. How To Backdoor Federated Learning, 2018, AISTATS.
[9] Dan Alistarh, et al. QSGD: Communication-Optimal Stochastic Gradient Descent, with Applications to Training Neural Networks, 2016, ArXiv abs/1610.02132.
[10] Richard Nock, et al. Private federated learning on vertically partitioned data via entity resolution and additively homomorphic encryption, 2017, ArXiv.
[11] Pete Warden, et al. Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition, 2018, ArXiv.
[12] Yoshua Bengio, et al. An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks, 2013, ICLR.
[13] Klaus-Robert Müller, et al. Entropy-Constrained Training of Deep Neural Networks, 2018, 2019 International Joint Conference on Neural Networks (IJCNN).
[14] Klaus-Robert Müller, et al. Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication, 2018, 2019 International Joint Conference on Neural Networks (IJCNN).
[15] Sarvar Patel, et al. Practical Secure Aggregation for Privacy-Preserving Machine Learning, 2017, IACR Cryptol. ePrint Arch.
[16] Yann LeCun, et al. The MNIST database of handwritten digits, 2005.
[17] Kenneth Heafield, et al. Sparse Communication for Distributed Gradient Descent, 2017, EMNLP.
[18] Kilian Q. Weinberger, et al. Densely Connected Convolutional Networks, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[19] Cong Xu, et al. TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning, 2017, NIPS.
[20] Nikko Strom, et al. Scalable distributed DNN training using commodity GPU cloud computing, 2015, INTERSPEECH.
[21] Daniel Schmidt, et al. The world in 2025 - predictions for the next ten years, 2015, 2015 10th International Microsystems, Packaging, Assembly and Circuits Technology Conference (IMPACT).
[22] Sergey Ioffe, et al. Batch Renormalization: Towards Reducing Minibatch Dependence in Batch-Normalized Models, 2017, NIPS.
[23] Quoc V. Le, et al. Sequence to Sequence Learning with Neural Networks, 2014, NIPS.
[24] Solomon W. Golomb, et al. Run-length encodings (Corresp.), 1966, IEEE Trans. Inf. Theory.
[25] Peter Richtárik, et al. Federated Learning: Strategies for Improving Communication Efficiency, 2016, ArXiv.
[26] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[27] Yue Zhao, et al. Federated Learning with Non-IID Data, 2018, ArXiv.
[28] William J. Dally, et al. Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training, 2017, ICLR.
[29] Blaise Agüera y Arcas, et al. Communication-Efficient Learning of Deep Networks from Decentralized Data, 2016, AISTATS.
[30] Dimitris S. Papailiopoulos, et al. ATOMO: Communication-efficient Learning via Atomic Sparsification, 2018, NeurIPS.
[31] Kamyar Azizzadenesheli, et al. signSGD with Majority Vote is Communication Efficient And Byzantine Fault Tolerant, 2018, ArXiv.
[32] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[33] Kamyar Azizzadenesheli, et al. signSGD with Majority Vote is Communication Efficient and Fault Tolerant, 2018, ICLR.
[34] Guigang Zhang, et al. Deep Learning, 2016, Int. J. Semantic Comput.
[35] Ian Goodfellow, et al. Deep Learning with Differential Privacy, 2016, CCS.
[36] Kamyar Azizzadenesheli, et al. signSGD: Compressed Optimisation for Non-Convex Problems, 2018, ICML.
[37] Sebastian Bosse, et al. Deep Neural Networks for No-Reference and Full-Reference Image Quality Assessment, 2016, IEEE Transactions on Image Processing.
[38] Klaus-Robert Müller, et al. Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models, 2017, ArXiv.