Wireless MapReduce Distributed Computing

Motivated by mobile edge computing and wireless data centers, we study a wireless distributed computing framework in which distributed nodes exchange information over a wireless interference network. The framework follows the structure of MapReduce and consists of Map, Shuffle, and Reduce phases, where Map and Reduce are computation phases and Shuffle is a data-transmission phase carried out over the wireless interference network. We demonstrate that, by duplicating the computation at a cluster of distributed nodes in the Map phase, one can reduce the communication load of the Shuffle phase. In this work, we characterize the fundamental tradeoff between computation load and communication load under the assumption of one-shot linear schemes. The proposed scheme is based on side-information cancellation and zero-forcing, and we prove that it achieves the optimal computation-communication tradeoff. It outperforms both the naive TDMA scheme, in which a single node transmits at a time, and the coded TDMA scheme that allows coding across data, in terms of the computation-communication tradeoff.
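
The two ingredients of the scheme can be illustrated with a small numerical sketch. The snippet below is a minimal example, not the paper's construction: it assumes two transmitting nodes and two receiving nodes, illustrative real-valued channel gains H, and an assumed side-information pattern (both transmitters know both intermediate values from the Map phase; receiver 3 additionally knows s4, receiver 4 knows neither). One symbol is protected by zero-forcing at the receiver that lacks side information, while the other receiver cancels the known interference directly.

```python
# Minimal one-shot linear sketch of side-information cancellation and
# zero-forcing, under assumed parameters: nodes 1 and 2 transmit, nodes
# 3 and 4 receive, noise omitted for clarity.  The values s3, s4, the
# channel matrix H, and the precoders are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Intermediate values produced in the Map phase.  Thanks to computation
# redundancy, BOTH transmitters know s3 and s4; receiver 3 additionally
# knows s4 as side information, receiver 4 knows neither value.
s3, s4 = rng.standard_normal(2)     # value needed by node 3 / node 4

# H[i, j]: channel gain from transmitter j (node 1 or 2) to receiver i
# (row 0 -> receiver 3, row 1 -> receiver 4).
H = rng.standard_normal((2, 2))
h3, h4 = H[0], H[1]

# Zero-forcing: the precoder for s3 is chosen orthogonal to receiver 4's
# channel, so s3 causes no interference at node 4 (which lacks s3).
a = np.array([h4[1], -h4[0]])       # h4 @ a == 0
# The precoder for s4 need not be zero-forced at receiver 3, since
# receiver 3 can cancel s4 itself; any generic choice works.
b = np.array([1.0, 0.0])

# One-shot linear transmit signals: x[j] is sent by transmitter j.
x = a * s3 + b * s4

# Received signals.
y3 = h3 @ x
y4 = h4 @ x

# Receiver 3: subtract the known contribution of s4 (side-information
# cancellation), then rescale to recover s3.
s3_hat = (y3 - (h3 @ b) * s4) / (h3 @ a)
# Receiver 4: interference from s3 was zero-forced, so just rescale.
s4_hat = y4 / (h4 @ b)

print(np.allclose([s3_hat, s4_hat], [s3, s4]))   # True
```

In the scheme itself, such one-shot linear precoders are designed jointly across all nodes so that each receiver recovers its missing intermediate values in a single channel use; the computation load (how many nodes map each file) determines how much side information and zero-forcing capability is available during the Shuffle phase.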
