Wireless MapReduce Distributed Computing

Motivated by mobile edge computing and wireless data centers, we study a wireless distributed computing framework in which distributed nodes exchange information over a wireless interference network. The framework follows the MapReduce structure and consists of Map, Shuffle, and Reduce phases, where Map and Reduce are computation phases and Shuffle is a data-transmission phase carried out over the wireless interference network. We show that duplicating the Map computation across a cluster of distributed nodes reduces the communication load required in the Shuffle phase. In this work, we characterize the fundamental tradeoff between computation load and communication load under the assumption of one-shot linear schemes. The proposed scheme is based on side-information cancellation and zero-forcing, and we prove that it achieves the optimal computation-communication tradeoff. In terms of this tradeoff, it outperforms both the naive TDMA scheme, in which a single node transmits at a time, and the coded TDMA scheme, which allows coding across data.
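To make the Map/Shuffle/Reduce structure and the effect of Map duplication concrete, below is a minimal counting sketch in Python. It is not the paper's scheme: it ignores the wireless channel, side-information cancellation, and zero-forcing, and simply counts how many intermediate values are missing locally when every file is mapped at r of the K nodes. The names (shuffle_load, K, r, N) and the round-robin file placement are illustrative assumptions, not the paper's notation.

```python
from itertools import combinations

def shuffle_load(K, r, N):
    """Fraction of intermediate values that must be shuffled when every
    file is mapped at r of the K nodes (uncoded counting only)."""
    # Assign each file to a subset of r nodes, cycling through all subsets.
    subsets = list(combinations(range(K), r))
    file_placement = [subsets[n % len(subsets)] for n in range(N)]

    # Node k is responsible for reduce function k, so it needs the
    # intermediate value (k, n) for every file n.
    needed = K * N          # total intermediate values needed
    missing = 0             # values not available locally after Map
    for k in range(K):
        for nodes in file_placement:
            if k not in nodes:   # node k did not map this file
                missing += 1
    return missing / needed      # normalized (uncoded) shuffle load

if __name__ == "__main__":
    K, N = 4, 12
    for r in range(1, K + 1):
        print(f"computation load r={r}: "
              f"uncoded shuffle load = {shuffle_load(K, r, N):.3f}")
```

With an even placement this prints a load of 1 - r/K, illustrating the basic tradeoff: raising the computation load r (more duplicated Map work) lowers the amount of data that must be exchanged in the Shuffle phase. The paper's one-shot linear scheme exploits this redundancy further over the wireless interference network.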
