Resource-Constrained On-Device Learning by Dynamic Averaging

Communication between data-generating devices accounts for a growing portion of the world's power consumption. Reducing communication is therefore vital, both from an economic and an ecological perspective. For machine learning, on-device learning avoids sending raw data, which can reduce communication substantially. Furthermore, keeping the data decentralized protects privacy-sensitive data. However, most learning algorithms require hardware with high computation power and thus high energy consumption. In contrast, ultra-low-power processors, such as FPGAs or microcontrollers, allow for energy-efficient learning of local models. Combined with communication-efficient distributed learning strategies, this reduces the overall energy consumption and enables applications that were previously impossible due to the limited energy budget of local devices. The major challenge, however, is that such low-power processors typically offer only integer processing capabilities. This paper investigates an approach to on-device learning of integer exponential families that can be executed on low-power processors, is privacy-preserving, and effectively minimizes communication. The empirical evaluation shows that the approach can reach a model quality comparable to a centrally learned regular model with an order of magnitude less communication. Comparing the overall energy consumption, this reduces the energy required to solve the machine learning task significantly.
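The core idea of dynamic averaging can be illustrated with a small sketch: each device trains locally and only communicates when its model has drifted sufficiently far from the last shared reference model. The Python sketch below is a minimal illustration under simplified assumptions (a squared-distance divergence, averaging only the devices that report a violation, and rounding to keep parameters integer-valued for integer-only hardware); the function names and the exact averaging rule are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def local_divergence(theta_local, theta_ref):
    """Squared Euclidean distance between a local model and the last
    synchronized reference model (illustrative divergence measure)."""
    d = theta_local - theta_ref
    return float(np.dot(d, d))

def dynamic_averaging_round(local_models, theta_ref, delta):
    """One communication round of a dynamic-averaging style protocol.

    A device only reports its model when it has drifted more than `delta`
    from the shared reference; the coordinator averages the reported models
    and broadcasts the result as the new reference.
    """
    violators = [i for i, theta in enumerate(local_models)
                 if local_divergence(theta, theta_ref) > delta]
    if not violators:
        return theta_ref, 0  # no communication needed this round

    # Average the reported models and round so the parameters stay
    # integer-valued, i.e., executable on integer-only processors.
    avg = np.mean([local_models[i] for i in violators], axis=0)
    theta_new = np.rint(avg).astype(np.int64)

    for i in violators:
        local_models[i] = theta_new.copy()
    return theta_new, len(violators)
```

With delta set to zero, every round triggers a full synchronization (plain periodic averaging); larger values of delta let devices run independently for longer and thus trade a small loss in model quality for substantially less communication.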
