Fully Decentralized Federated Learning

We consider the problem of training a machine learning model over a network of users in a fully decentralized framework. The users take a Bayesian-like approach via the introduction of a belief over the model parameter space. We propose a distributed learning algorithm in which users update their beliefs by aggregating information from their one-hop neighbors to learn a model that best fits the observations over the entire network. In addition, we obtain sufficient conditions to ensure that the probability of error is small for every user in the network. Finally, we discuss the approximations required to apply this algorithm to training neural networks.
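The belief-update scheme described above can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: it assumes a finite hypothesis set over the parameter space, a row-stochastic mixing matrix `W` encoding the one-hop neighborhood structure, and the common social-learning rule of a local Bayesian update followed by geometric (log-linear) averaging of neighbors' beliefs; the function and variable names are ours.

```python
import numpy as np

def belief_update(beliefs, likelihoods, W):
    """One round of decentralized belief updating (illustrative sketch).

    beliefs:     (n_users, n_hypotheses) current beliefs; each row sums to 1
    likelihoods: (n_users, n_hypotheses) likelihood of each user's latest
                 local observation under each candidate parameter
    W:           (n_users, n_users) row-stochastic mixing matrix with
                 W[i, j] > 0 only if j is a one-hop neighbor of i (or j == i)
    """
    # Local Bayesian update: weight the prior belief by the likelihood
    # of the new observation, then renormalize each user's row.
    posterior = beliefs * likelihoods
    posterior /= posterior.sum(axis=1, keepdims=True)
    # Aggregate one-hop neighbors' beliefs via log-linear averaging,
    # then renormalize to obtain the next belief vector.
    log_agg = W @ np.log(posterior)
    new_beliefs = np.exp(log_agg - log_agg.max(axis=1, keepdims=True))
    new_beliefs /= new_beliefs.sum(axis=1, keepdims=True)
    return new_beliefs

# Toy example: 3 fully connected users, 2 candidate parameters, with
# observations that are more likely under hypothesis 0 for every user.
beliefs = np.full((3, 2), 0.5)
likelihoods = np.array([[0.8, 0.2], [0.7, 0.3], [0.9, 0.1]])
W = np.full((3, 3), 1.0 / 3.0)
for _ in range(5):
    beliefs = belief_update(beliefs, likelihoods, W)
```

Repeating the update concentrates every user's belief on the hypothesis that best explains the observations across the whole network, which is the network-wide fit the abstract refers to.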