An approach to distributed machine learning is to train models on local datasets and to aggregate these models into a single, stronger one. A popular instance of this form of parallelization is federated learning, where the nodes periodically send their local models to a coordinator that aggregates them and redistributes the aggregate so that training can continue from it. The most frequently used form of aggregation is averaging the model parameters, e.g., the weights of a neural network. However, due to the non-convexity of the loss surface of neural networks, averaging can have detrimental effects, and it remains an open question under which conditions averaging is beneficial. In this paper, we study this problem from the perspective of information theory: we measure the mutual information between the representation and the inputs, as well as between the representation and the labels, in the local models and compare these quantities to the corresponding information contained in the representation of the averaged model. Our empirical results confirm previous observations about the practical usefulness of averaging for neural networks, even when the local data distributions differ strongly. Furthermore, we gain additional insights into how the aggregation frequency affects the information flow and thus the success of distributed learning. These insights will be helpful both for improving the current synchronization process and for further understanding the effects of model aggregation.
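As a rough illustration of the two ingredients described above, the sketch below shows layer-wise parameter averaging across nodes and a simple binning-based estimate of the mutual information between a one-dimensional representation and the labels. This is a minimal sketch under stated assumptions, not the authors' implementation; the NumPy formulation, the bin count, and the toy data are illustrative choices.

```python
# Minimal sketch (not the paper's code): periodic parameter averaging across
# nodes, plus a binning-based estimate of I(T; Y) between a hidden
# representation T and labels Y. Bin count and toy data are assumptions.
import numpy as np

def average_models(local_models):
    """Average parameters layer-wise across nodes (the aggregation step)."""
    keys = local_models[0].keys()
    return {k: np.mean([m[k] for m in local_models], axis=0) for k in keys}

def mutual_information(t, y, n_bins=30):
    """Estimate I(T; Y) by discretizing the 1-D representation t into bins."""
    t_binned = np.digitize(t, np.linspace(t.min(), t.max(), n_bins))
    joint = np.zeros((n_bins + 1, int(y.max()) + 1))
    for ti, yi in zip(t_binned, y.astype(int)):
        joint[ti, yi] += 1
    joint /= joint.sum()
    pt = joint.sum(axis=1, keepdims=True)   # marginal p(t)
    py = joint.sum(axis=0, keepdims=True)   # marginal p(y)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (pt @ py)[nz])))

# Usage: three nodes with single-layer "models" and a toy representation/label pair.
nodes = [{"w": np.random.randn(4, 2), "b": np.random.randn(2)} for _ in range(3)]
global_model = average_models(nodes)           # would be redistributed to all nodes
t = np.random.randn(1000)                      # stand-in for a hidden activation
y = (t + 0.5 * np.random.randn(1000) > 0).astype(int)  # labels correlated with t
print(mutual_information(t, y))
```

In this setup, the same estimator would be applied to the representations of the local models and of the averaged model, so that the information about inputs and labels can be compared before and after aggregation.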