A Clonal Selection Optimization System for Multiparty Secure Computing

Innovation in deep learning modeling schemes plays an important role in advancing artificial-intelligence research on complex problems in smart cities and in the development of next-generation information technology. With the widespread use of smart interactive devices and systems, the exponential growth of data volume and increasingly complex modeling requirements have made deep learning modeling more difficult, and the classical centralized modeling scheme has encountered bottlenecks in both model performance and the diversification of smart application scenarios. Parallel processing systems in deep learning link the virtual information space with the physical world; although distributed deep learning has attracted substantial research attention for its advantages in training efficiency, improving the availability of trained models and preventing privacy disclosure remain its main challenges. To address these issues, this research develops a clonal selection optimization system based on the federated learning framework for model training on large-scale data. The system adopts a heuristic clonal selection strategy in local model optimization to improve the effect of federated training: it enhances the adaptability and robustness of the federated learning scheme while improving modeling performance and training efficiency. Furthermore, the research strengthens the privacy defense capability of the federated learning scheme for big data through differential privacy preprocessing. Simulation results show that the proposed clonal selection optimization system based on federated learning significantly improves basic model performance, stability, and privacy.
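To make the described scheme concrete, the following minimal Python sketch shows one plausible way the pieces could fit together: each client refines the global model locally with a clonal selection search (fitter candidates are cloned more and mutated less), perturbs its update with clipped Gaussian noise as a simplified stand-in for differential privacy preprocessing, and the server averages the updates in a FedAvg-style round. This is an illustrative assumption, not the paper's implementation: the linear least-squares model, the function names (clonal_local_update, dp_noise), and all hyperparameters are hypothetical choices for demonstration.

import numpy as np

rng = np.random.default_rng(0)

def loss(w, X, y):
    # Mean squared error of a linear model; stands in for the local deep model.
    return np.mean((X @ w - y) ** 2)

def clonal_local_update(w, X, y, pop=8, clones=4, sigma=0.1, gens=20):
    # Clonal selection over a population of candidate weight vectors:
    # higher-affinity (lower-loss) candidates get more clones and smaller mutations.
    population = [w + sigma * rng.standard_normal(w.shape) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda c: loss(c, X, y))      # affinity ranking
        offspring = []
        for rank, parent in enumerate(population[:pop // 2]):
            n_clones = max(1, clones - rank)              # fitter parents clone more
            scale = sigma * (rank + 1) / pop              # hypermutation grows with rank
            offspring += [parent + scale * rng.standard_normal(w.shape)
                          for _ in range(n_clones)]
        population = sorted(population + offspring,
                            key=lambda c: loss(c, X, y))[:pop]
    return population[0]

def dp_noise(w, clip=1.0, noise_scale=0.05):
    # Simplified differential-privacy-style perturbation (illustrative only):
    # clip the update magnitude, then add Gaussian noise.
    norm = np.linalg.norm(w)
    w = w if norm <= clip else w * (clip / norm)
    return w + noise_scale * rng.standard_normal(w.shape)

# Synthetic federated setup: three clients sharing an underlying linear target.
d, n_clients = 5, 3
w_true = rng.standard_normal(d)
clients = []
for _ in range(n_clients):
    X = rng.standard_normal((50, d))
    clients.append((X, X @ w_true + 0.1 * rng.standard_normal(50)))

# One federated round per iteration: local clonal search, noised upload, averaging.
w_global = np.zeros(d)
for round_ in range(10):
    updates = [dp_noise(clonal_local_update(w_global, X, y)) for X, y in clients]
    w_global = np.mean(updates, axis=0)                   # FedAvg-style aggregation
    mean_loss = np.mean([loss(w_global, X, y) for X, y in clients])
    print(f"round {round_}: mean client loss = {mean_loss:.4f}")

In this sketch the clonal selection step replaces gradient-based local training entirely; in practice it could equally be layered on top of a gradient optimizer, and a real deployment would calibrate the noise to a stated privacy budget rather than use a fixed noise_scale.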
