Fully Decentralized and Federated Low Rank Compressive Sensing

In this work we develop a fully decentralized, federated, and fast solution to the recently studied Low Rank Compressive Sensing (LRCS) problem: recover an n × q low-rank matrix X* = [x*_1, x*_2, ..., x*_q] from column-wise linear projections, y_k := A_k x*_k, k = 1, 2, ..., q, where each y_k is an m-length vector with m < n. An important application where this problem occurs, and where a decentralized solution is desirable, is federated sketching: efficiently compressing the vast amounts of distributed images/videos generated by smartphones and other devices while respecting the users' privacy. Images from different devices, once grouped by category, are quite similar, and hence the matrix formed by the vectorized images of a given category is well modeled as low rank. A simple federated sketching solution is to left-multiply the k-th vectorized image by a random matrix A_k and to store only y_k. When m ≪ n, this requires much less storage than storing the full image, and is much faster to implement than traditional image compression. Suppose there are p nodes (say p smartphones), and each stores a set of q/p sketches of its images. We develop a decentralized projected gradient descent (GD) based approach to jointly reconstruct the images of all the phones/users from their respective stored sketches. The algorithm is such that the phones/users never share their raw data (their subset of y_k's) with the other phones, but only summaries of this data, at each algorithm iteration. Moreover, the reconstructed images of user g are obtained only locally; other users cannot reconstruct them. Only the column span of the matrix X* is reconstructed globally. By "decentralized" we mean that there is no central node to which all nodes are connected; thus, the only way to aggregate the summaries from the various nodes is via an iterative consensus algorithm that eventually provides an estimate of the aggregate at each node, as long as the network is strongly connected.
We validated the effectiveness of our algorithm via extensive simulation experiments.
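The two ingredients described above can be sketched in a few lines of NumPy: (1) federated sketching, where each node stores only the compressed measurements y_k = A_k x*_k of its columns, and (2) decentralized average consensus, where nodes on a communication graph repeatedly mix their local summaries with a doubly-stochastic weight matrix until every node holds the global average. This is a minimal, hypothetical illustration, not the paper's algorithm: the problem sizes, the ring topology, the mixing weights, and the particular local summary (a spectral-initialization-style matrix whose top eigenvectors estimate span(X*)) are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q, p = 20, 10, 8, 4  # hypothetical sizes; m < n as in the sketching setup

# Rank-1 X* for simplicity: all columns are scalar multiples of one direction u.
u = rng.standard_normal(n)
u /= np.linalg.norm(u)
X = np.outer(u, rng.standard_normal(q))

# Federated sketching: only y_k = A_k x*_k is ever stored, never x*_k itself.
A = [rng.standard_normal((m, n)) / np.sqrt(m) for _ in range(q)]
Y = [A[k] @ X[:, k] for k in range(q)]

# Each of the p nodes owns q/p columns and forms a local summary matrix
# (1/q) * sum_k A_k^T y_k y_k^T A_k -- one natural choice whose average over
# nodes has top eigenvectors aligned with span(X*).
cols = np.array_split(range(q), p)
local = [sum(A[k].T @ np.outer(Y[k], Y[k]) @ A[k] for k in g) / q for g in cols]

# Decentralized average consensus on a ring (no central node): repeated
# mixing with a symmetric doubly-stochastic W drives every node's estimate
# to the global average of the local summaries.
W = np.zeros((p, p))
for i in range(p):
    W[i, i] = 0.5
    W[i, (i - 1) % p] = 0.25
    W[i, (i + 1) % p] = 0.25

Z = np.stack(local)                 # per-node estimates, shape (p, n, n)
for _ in range(200):
    Z = np.tensordot(W, Z, axes=1)  # z_i <- sum_j W_ij z_j (mix with neighbors)

global_avg = sum(local) / p         # what every node converges to
```

After enough mixing rounds, each node's copy `Z[i]` matches `global_avg` to numerical precision, even though no node ever saw another node's raw sketches; only the (already compressed and summed) summary matrices travel over the network.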
