Privacy-Preserving Asynchronous Federated Learning Mechanism for Edge Network Computing

In traditional cloud architectures, data must be uploaded to the cloud for processing, which introduces transmission and response delays. Edge computing has emerged to address this problem: processing data on edge nodes reduces transmission delay and improves response speed. In recent years, demand for artificial intelligence at the network edge has grown. However, the data held by any single edge node is limited and often insufficient for machine learning, so performing machine learning across edge nodes while keeping data confidential has become a research hotspot. This paper proposes a Privacy-Preserving Asynchronous Federated Learning Mechanism for Edge Network Computing (PAFLM), which allows multiple edge nodes to carry out federated learning more efficiently without sharing their private data. Compared with traditional distributed learning, the proposed method compresses the communication between nodes and the parameter server during training without affecting accuracy. Moreover, it allows a node to join or quit at any stage of learning, making it suitable for scenarios with highly mobile edge devices.
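The abstract does not specify PAFLM's exact algorithm, but the two ingredients it names, compressed node-to-server gradient exchange and nodes that keep their raw data local, can be illustrated with a generic sketch. The example below uses top-k gradient sparsification with error feedback (in the spirit of the gradient-compression literature) on a toy linear-regression task; all class names, the choice of compressor, and the learning rate are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def topk_compress(grad, k):
    """Keep only the k largest-magnitude entries of the gradient; zero the rest."""
    flat = grad.ravel()
    keep = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[keep] = flat[keep]
    return sparse.reshape(grad.shape)

class ParameterServer:
    """Holds the global model and applies compressed updates as they arrive."""
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr

    def apply_update(self, sparse_grad):
        self.w -= self.lr * sparse_grad

class EdgeNode:
    """Trains on private local data; uploads only a sparse, compressed gradient."""
    def __init__(self, X, y):
        self.X, self.y = X, y                 # private data, never leaves the node
        self.residual = np.zeros(X.shape[1])  # error feedback: gradient mass not yet sent

    def compressed_gradient(self, w, k):
        # mean-squared-error gradient for a linear model, plus the carried-over residual
        g = self.X.T @ (self.X @ w - self.y) / len(self.y) + self.residual
        sparse = topk_compress(g, k)
        self.residual = g - sparse            # carry the untransmitted part to the next round
        return sparse

# Usage: two nodes on a synthetic task. In a real deployment each update would
# arrive asynchronously, and nodes could join or leave between rounds.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 3.0, 0.5])
nodes = []
for _ in range(2):
    X = rng.normal(size=(50, 4))
    nodes.append(EdgeNode(X, X @ true_w))

server = ParameterServer(dim=4)
for _ in range(200):
    for node in nodes:
        server.apply_update(node.compressed_gradient(server.w, k=2))  # send 2 of 4 entries
```

The error-feedback residual is what keeps aggressive compression from hurting accuracy: coordinates that are dropped in one round accumulate and are eventually transmitted, so the server still sees the full gradient signal over time.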
