Distributed Parameter Estimation in Randomized One-hidden-layer Neural Networks

This paper addresses distributed parameter estimation in randomized one-hidden-layer neural networks. A group of agents sequentially receives measurements of an unknown parameter that is only partially observable to each of them. We present a fully distributed estimation algorithm in which agents exchange local estimates with their neighbors to collectively identify the true value of the parameter. We prove that this distributed update yields an asymptotically unbiased estimator of the unknown parameter, i.e., the expected global error converges to zero asymptotically. We further analyze the efficiency of the proposed estimation scheme by establishing an asymptotic upper bound on the variance of the global error. Applying our method to a real-world appliances energy prediction dataset, we find that the empirical results corroborate the theoretical analysis.
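The abstract describes a consensus-style scheme: each agent mixes its neighbors' estimates and corrects with its own partial, noisy measurement. The sketch below is an illustrative simulation of that generic consensus-plus-innovation pattern, not the paper's exact algorithm; the network size, mixing weights, step size, and observation matrices are all assumptions chosen for the example. Each agent alone sees the parameter through a rank-deficient observation matrix, but the agents are collectively observable, so the local estimates converge to a neighborhood of the true parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown parameter (e.g., output-layer weights of a randomized one-hidden-layer net)
d, n_agents = 5, 4
theta_true = rng.normal(size=d)

# Each agent observes theta through its own short (partially observable) matrix;
# individually rank-deficient, collectively full rank with high probability.
H = [rng.normal(size=(2, d)) for _ in range(n_agents)]

# Doubly stochastic mixing matrix for a 4-agent ring (assumed topology)
W = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])

x = np.zeros((n_agents, d))   # local estimates, one row per agent
step = 0.05                   # innovation step size (assumed constant)

for t in range(2000):
    # Sequential noisy local measurements
    y = [H[i] @ theta_true + 0.01 * rng.normal(size=2) for i in range(n_agents)]
    # Consensus on neighbors' estimates + innovation from the local measurement
    x_new = W @ x
    for i in range(n_agents):
        x_new[i] += step * H[i].T @ (y[i] - H[i] @ x[i])
    x = x_new

# Worst-case agent error after mixing information across the network
err = np.linalg.norm(x - theta_true, axis=1).max()
print(err)
```

Running the loop drives every agent's estimate toward `theta_true` even though no single agent's measurements identify it, which is the qualitative behavior the unbiasedness and variance results formalize.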
