Information Stealing in Federated Learning Systems Based on Generative Adversarial Networks

In deep learning systems in which intelligent machines collaborate to solve problems, an attack could cause a node in the network to make a mistake in a critical judgment. At the same time, the security and privacy concerns of AI have drawn growing attention from experts across multiple disciplines. In this research, we successfully mounted adversarial attacks on a federated learning (FL) environment using three different datasets. The attacks leveraged generative adversarial networks (GANs) to affect the learning process and to reconstruct users' private data by learning hidden features from the shared local model parameters. The attacks were target-oriented, drawing data with distinct class distributions from CIFAR-10, MNIST, and Fashion-MNIST, respectively. Moreover, by measuring the Euclidean distance between the real data and the reconstructed adversarial samples, we evaluated the adversary's performance during the learning process under various scenarios. Finally, we successfully reconstructed the victim's real data from the shared global model parameters with all of the applied datasets.
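As a rough illustration of the attack the abstract describes, the following PyTorch sketch shows how a malicious FL participant might use the shared global model as a fixed discriminator and train a local generator toward a target class, then score reconstructions by mean Euclidean distance. This is a minimal sketch in the spirit of GAN-based information-leakage attacks on collaborative learning, not the paper's exact implementation; the Generator architecture and the names attack_round, target_class, and mean_euclidean_distance are hypothetical, and the global model is assumed to be an MNIST-like image classifier the attacker can backpropagate through.

```python
# Hypothetical sketch of a GAN-based reconstruction attack in federated
# learning. Assumes the shared global model classifies 1x28x28 images.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps a latent vector to a 28x28 image; architecture is an assumption."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, 28 * 28), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 1, 28, 28)

def attack_round(global_model, generator, target_class, latent_dim=100,
                 steps=50, batch_size=64, lr=2e-4, device="cpu"):
    """One attacker round: freeze the shared global model and push the
    generator to produce samples the model labels as `target_class`."""
    global_model.eval()
    for p in global_model.parameters():      # shared model acts as a fixed
        p.requires_grad_(False)              # discriminator; only the
    opt = torch.optim.Adam(generator.parameters(), lr=lr)  # generator learns
    targets = torch.full((batch_size,), target_class, device=device)
    for _ in range(steps):
        z = torch.randn(batch_size, latent_dim, device=device)
        fake = generator(z)
        logits = global_model(fake)          # forward pass through shared model
        loss = F.cross_entropy(logits, targets)
        opt.zero_grad()
        loss.backward()                      # gradients flow into the generator
        opt.step()
    return generator

def mean_euclidean_distance(real_batch, fake_batch):
    """Evaluation metric from the abstract: average L2 distance between real
    target-class samples and reconstructed adversarial samples."""
    real = real_batch.flatten(1)
    fake = fake_batch.flatten(1)
    return torch.cdist(real, fake).mean().item()
```

In a full federated run, the attacker would presumably repeat attack_round after each aggregation step as the global model improves, so the generator's outputs gradually converge toward the victim's private class distribution; the mean Euclidean distance then tracks reconstruction quality across rounds.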
