Wireless distributed computing can be widely deployed at the network edge to complete large-scale computing tasks. However, two main bottlenecks limit the performance of distributed systems: stragglers (nodes with fewer computing resources) and the heavy communication load among nodes. Previous work [1] proposed a unified framework that addresses both issues simultaneously in wireline scenarios. In this paper, we extend the framework of [1] to wireless distributed computing, where mobile devices connected through an access point collaborate on massive distributed matrix multiplication. We allow the subset of devices that finish their local computations first to produce the final result, thereby mitigating the impact of the remaining slow devices (stragglers). In our extended framework, the assignment of output vectors is flexible: different devices may be assigned different numbers of output vectors when the total cannot be divided equally among them. The uplink transmission bandwidth from the mobile devices to the access point is reduced relative to the wireline bandwidth in [1] owing to a different transmission scheme, and after the uplink transmission, the downlink bandwidth from the access point to the mobile devices is further reduced by a new coding technique. Furthermore, we prove information-theoretic lower bounds on the uplink and downlink transmission loads under flexible assignment of output vectors, and we show that the achievable loads are within constant multiplicative gaps of these bounds when all devices finish their local computations and the output vectors can be assigned equally among them.
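To make the straggler-mitigation idea concrete, the following is a minimal sketch of MDS-coded distributed matrix-vector multiplication, in which any k of n coded worker results suffice to recover the full product. This is a generic illustration of the coded-computing principle underlying the framework, not the paper's exact scheme; all function names and parameters here are illustrative assumptions.

```python
import numpy as np

def encode_rows(A, n, k, rng):
    """Split A into k row blocks and form n coded blocks A_i = sum_j G[i, j] * A_j.
    Illustrative sketch: a random real generator matrix is used, so any k of its
    rows are invertible with probability 1 (an MDS-like property over the reals)."""
    blocks = np.split(A, k, axis=0)          # requires the row count of A to be divisible by k
    G = rng.standard_normal((n, k))          # generator matrix of the code
    coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]
    return coded, G

def decode(partials, ids, G, k):
    """Recover all k uncoded block products from the results of any k workers."""
    Gk = G[ids[:k], :]                       # k x k submatrix for the fast workers
    Y = np.stack([partials[i] for i in range(k)])  # coded products, one per fast worker
    X = np.linalg.solve(Gk, Y)               # invert the code: rows of X are block products
    return X.reshape(-1)

rng = np.random.default_rng(0)
m, d, n, k = 12, 5, 6, 4                     # n = 6 workers, any k = 4 results suffice
A, x = rng.standard_normal((m, d)), rng.standard_normal(d)

coded, G = encode_rows(A, n, k, rng)
results = {i: coded[i] @ x for i in range(n)}  # worker i computes its coded product A_i x

fast = [5, 2, 0, 3]                          # first k workers to finish; 2 stragglers ignored
y = decode([results[i] for i in fast], fast, G, k)
assert np.allclose(y, A @ x)                 # full product recovered despite stragglers
```

Because the decoder only needs the first k responses, the slowest n - k workers never delay the computation; this is the sense in which stragglers are "mitigated" in the abstract above.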
[1] Wazir Zada Khan, et al., "Edge computing: A survey," Future Gener. Comput. Syst., 2019.
[2] Kannan Ramchandran, et al., "Speeding Up Distributed Machine Learning Using Codes," IEEE Transactions on Information Theory, 2015.
[3] Mohammad Ali Maddah-Ali, et al., "Coding for Distributed Fog Computing," IEEE Communications Magazine, 2017.
[4] Urs Niesen, et al., "Fundamental limits of caching," 2013 IEEE International Symposium on Information Theory, 2012.
[5] Mohammad Ali Maddah-Ali, et al., "Compressed Coded Distributed Computing," 2018 IEEE International Symposium on Information Theory (ISIT), 2018.
[6] Alexandros G. Dimakis, et al., "Gradient Coding: Avoiding Stragglers in Distributed Learning," ICML, 2017.
[7] Guanding Yu, et al., "Accelerating DNN Training in Wireless Federated Edge Learning Systems," IEEE Journal on Selected Areas in Communications, 2019.
[8] Mohammad Ali Maddah-Ali, et al., "A Unified Coding Framework for Distributed Computing with Straggling Servers," 2016 IEEE Globecom Workshops (GC Wkshps), 2016.
[9] A. Salman Avestimehr, et al., "A Fundamental Tradeoff Between Computation and Communication in Distributed Computing," IEEE Transactions on Information Theory, 2016.
[10] A. Salman Avestimehr, et al., "A Scalable Framework for Wireless Distributed Computing," IEEE/ACM Transactions on Networking, 2016.
[11] Sanjay Ghemawat, et al., "MapReduce: Simplified Data Processing on Large Clusters," OSDI, 2004.