Federated and Transfer Learning: A Survey on Adversaries and Defense Mechanisms

[1] V. Palade, et al. Constrained Generative Adversarial Learning for Dimensionality Reduction, 2023, IEEE Transactions on Knowledge and Data Engineering.

[2] M. Saif, et al. Generative-Adversarial Class-Imbalance Learning for Classifying Cyber-Attacks and Faults - A Cyber-Physical Power System, 2022, IEEE Transactions on Dependable and Secure Computing.

[3] M. Saif, et al. To Tolerate or To Impute Missing Values in V2X Communications Data?, 2022, IEEE Internet of Things Journal.

[4] Hossein Hassani, et al. Real-time out-of-step prediction control to prevent emerging blackouts in power systems: A reinforcement learning approach, 2022, Applied Energy.

[5] Manjushree B. Aithal, et al. Mitigating Black-Box Adversarial Attacks via Output Noise Perturbation, 2021, IEEE Access.

[6] Maryam Farajzadeh-Zanjani, et al. Generative adversarial dimensionality reduction for diagnosing faults and attacks in cyber-physical systems, 2021, Neurocomputing.

[7] Diego Perino, et al. PPFL: privacy-preserving federated learning with trusted execution environments, 2021, MobiSys.

[8] Roozbeh Razavi-Far, et al. Unsupervised concrete feature selection based on mutual information for diagnosing faults and cyber-attacks in power systems, 2021, Eng. Appl. Artif. Intell.

[9] M. Saif, et al. DLIN: Deep Ladder Imputation Network, 2021, IEEE Transactions on Cybernetics.

[10] Farinaz Koushanfar, et al. A Taxonomy of Attacks on Federated Learning, 2021, IEEE Security & Privacy.

[11] Masood Parvania, et al. Adversarial Semi-Supervised Learning for Diagnosing Faults and Attacks in Power Grids, 2021, IEEE Transactions on Smart Grid.

[12] Virginia Smith, et al. Ditto: Fair and Robust Federated Learning Through Personalization, 2020, ICML.

[13] Philip S. Yu, et al. Privacy and Robustness in Federated Learning: Attacks and Defenses, 2020, IEEE Transactions on Neural Networks and Learning Systems.

[14] Lingjuan Lyu, et al. A Reputation Mechanism Is All You Need: Collaborative Fairness and Adversarial Robustness in Federated Learning, 2020, ArXiv abs/2011.10464.

[15] Yanyang Lu, et al. An Efficient and Robust Aggregation Algorithm for Learning Federated CNN, 2020, SPML.

[16] Sudipan Saha, et al. Federated Transfer Learning: concept and applications, 2020, Intelligenza Artificiale.

[17] Daniel Rueckert, et al. Robust Aggregation for Adaptive Privacy Preserving Federated Learning in Healthcare, 2020, ArXiv.

[18] L. Lyu, et al. Federated Model Distillation with Noise-Free Differential Privacy, 2020, IJCAI.

[19] Yang Zou, et al. Privacy Analysis of Deep Learning in the Wild: Membership Inference Attacks against Transfer Learning, 2020, ArXiv.

[20] Kartik Sreenivasan, et al. Attack of the Tails: Yes, You Really Can Backdoor Federated Learning, 2020, NeurIPS.

[21] Lingjuan Lyu, et al. How to Democratise and Protect AI: Fair and Differentially Private Decentralised Deep Learning, 2020, IEEE Transactions on Dependable and Secure Computing.

[22] Tianjian Chen, et al. A Secure Federated Transfer Learning Framework, 2020, IEEE Intelligent Systems.

[23] Tao Xiang, et al. A training-integrity privacy-preserving federated learning scheme with trusted execution environment, 2020, Inf. Sci.

[24] Xiaochun Cao, et al. FedSteg: A Federated Transfer Learning Framework for Secure Image Steganalysis, 2020, IEEE Transactions on Network Science and Engineering.

[25] Bo Li, et al. DBA: Distributed Backdoor Attacks against Federated Learning, 2020, ICLR.

[26] Han Yu, et al. Threats to Federated Learning: A Survey, 2020, ArXiv.

[27] Diana Marculescu, et al. Improving the Adversarial Robustness of Transfer Learning via Noisy Feature Distillation, 2020, ArXiv.

[28] Tianjian Chen, et al. Learning to Detect Malicious Clients for Robust Federated Learning, 2020, ArXiv.

[29] Shuo Wang, et al. Backdoor Attacks Against Transfer Learning With Pre-Trained Deep Learning Models, 2020, IEEE Transactions on Services Computing.

[30] Zaïd Harchaoui, et al. Robust Aggregation for Federated Learning, 2019, IEEE Transactions on Signal Processing.

[31] Richard Nock, et al. Advances and Open Problems in Federated Learning, 2019, Found. Trends Mach. Learn.

[32] Han Yu, et al. Privacy-preserving Heterogeneous Federated Transfer Learning, 2019, 2019 IEEE International Conference on Big Data (Big Data).

[33] Jinyuan Jia, et al. Local Model Poisoning Attacks to Byzantine-Robust Federated Learning, 2019, USENIX Security Symposium.

[34] Maryam Farajzadeh-Zanjani, et al. Imputation-Based Ensemble Techniques for Class Imbalance Learning, 2019, IEEE Transactions on Knowledge and Data Engineering.

[35] Ben Y. Zhao, et al. Latent Backdoor Attacks on Deep Neural Networks, 2019, CCS.

[36] Li Chen, et al. Robust Federated Learning With Noisy Communication, 2019, IEEE Transactions on Communications.

[37] Chaoping Xing, et al. Secure and Efficient Federated Transfer Learning, 2019, 2019 IEEE International Conference on Big Data (Big Data).

[38] Yang Liu, et al. Abnormal Client Behavior Detection in Federated Learning, 2019, ArXiv.

[39] Junpu Wang, et al. FedMD: Heterogenous Federated Learning via Model Distillation, 2019, ArXiv.

[40] Bo Li, et al. Attack-Resistant Federated Learning with Residual-based Reweighting, 2019, ArXiv.

[41] N. Gong, et al. MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples, 2019, CCS.

[42] Xin Qin, et al. FedHealth: A Federated Transfer Learning Framework for Wearable Healthcare, 2019, IEEE Intelligent Systems.

[43] Song Han, et al. Deep Leakage from Gradients, 2019, NeurIPS.

[44] Sailik Sengupta, et al. A Survey of Moving Target Defenses for Network Security, 2019, IEEE Communications Surveys & Tutorials.

[45] Dan Boneh, et al. Adversarial Training and Robustness for Multiple Perturbations, 2019, NeurIPS.

[46] Larry S. Davis, et al. Adversarial Training for Free!, 2019, NeurIPS.

[47] Shahbaz Rezaei, et al. A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning, 2019, ICLR.

[48] Ben Y. Zhao, et al. Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks, 2019, 2019 IEEE Symposium on Security and Privacy (SP).

[49] Jörn-Henrik Jacobsen, et al. Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness, 2019, ArXiv.

[50] Justin Hsu, et al. Data Poisoning against Differentially-Private Learners: Attacks and Defenses, 2019, IJCAI.

[51] David Evans, et al. Evaluating Differentially Private Machine Learning in Practice, 2019, USENIX Security Symposium.

[52] Qiang Yang, et al. Federated Machine Learning, 2019, ACM Trans. Intell. Syst. Technol.

[53] Francesca Bovolo, et al. Unsupervised Deep Change Vector Analysis for Multiple-Change Detection in VHR Images, 2019, IEEE Transactions on Geoscience and Remote Sensing.

[54] Wouter Joosen, et al. Chained Anomaly Detection Models for Federated Learning: An Intrusion Detection Case Study, 2018, Applied Sciences.

[55] Anit Kumar Sahu, et al. On the Convergence of Federated Optimization in Heterogeneous Networks, 2018, ArXiv.

[56] Alan L. Yuille, et al. Feature Denoising for Improving Adversarial Robustness, 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

[57] Prateek Mittal, et al. Analyzing Federated Learning through an Adversarial Lens, 2018, ICML.

[58] Kamyar Azizzadenesheli, et al. signSGD with Majority Vote is Communication Efficient and Fault Tolerant, 2018, ICLR.

[59] Sebastian Caldas, et al. Expanding the Reach of Federated Learning by Reducing Client Resource Requirements, 2018, ArXiv.

[60] Ivan Beschastnikh, et al. Mitigating Sybils in Federated Learning Poisoning, 2018, ArXiv.

[61] Cheng Lei, et al. Moving Target Defense Techniques: A Survey, 2018, Secur. Commun. Networks.

[62] Vitaly Shmatikov, et al. How To Backdoor Federated Learning, 2018, AISTATS.

[63] Mario Fritz, et al. ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models, 2018, NDSS.

[64] Brendan Dolan-Gavitt, et al. Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks, 2018, RAID.

[65] Sanjiv Kumar, et al. cpSGD: Communication-efficient and differentially-private distributed SGD, 2018, NeurIPS.

[66] Vitaly Shmatikov, et al. Exploiting Unintended Feature Leakage in Collaborative Learning, 2018, 2019 IEEE Symposium on Security and Privacy (SP).

[67] Shiho Moriai, et al. Privacy-Preserving Deep Learning via Additively Homomorphic Encryption, 2018, IEEE Transactions on Information Forensics and Security.

[68] Xiaoqian Jiang, et al. Secure Logistic Regression Based on Homomorphic Encryption: Design and Evaluation, 2018, IACR Cryptol. ePrint Arch.

[69] Chang Liu, et al. Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning, 2018, 2018 IEEE Symposium on Security and Privacy (SP).

[70] Reza Shokri, et al. Machine Learning with Membership Privacy using Adversarial Regularization, 2018, CCS.

[71] Farinaz Koushanfar, et al. Chameleon: A Hybrid Secure Computation Framework for Machine Learning Applications, 2018, IACR Cryptol. ePrint Arch.

[72] Tassilo Klein, et al. Differentially Private Federated Learning: A Client Level Perspective, 2017, ArXiv.

[73] Aleksander Madry, et al. Exploring the Landscape of Spatial Robustness, 2017, ICML.

[74] Rachid Guerraoui, et al. Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent, 2017, NIPS.

[75] Richard Nock, et al. Private federated learning on vertically partitioned data via entity resolution and additively homomorphic encryption, 2017, ArXiv.

[76] Sarvar Patel, et al. Practical Secure Aggregation for Privacy-Preserving Machine Learning, 2017, IACR Cryptol. ePrint Arch.

[77] Ankur Srivastava, et al. Neural Trojans, 2017, 2017 IEEE International Conference on Computer Design (ICCD).

[78] Somesh Jha, et al. Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting, 2017, 2018 IEEE 31st Computer Security Foundations Symposium (CSF).

[79] Li Xiong, et al. A Comprehensive Comparison of Multiparty Secure Additions with Differential Privacy, 2017, IEEE Transactions on Dependable and Secure Computing.

[80] Ameet Talwalkar, et al. Federated Multi-Task Learning, 2017, NIPS.

[81] Payman Mohassel, et al. SecureML: A System for Scalable Privacy-Preserving Machine Learning, 2017, 2017 IEEE Symposium on Security and Privacy (SP).

[82] Giuseppe Ateniese, et al. Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning, 2017, CCS.

[83] Michael Backes, et al. Membership Privacy in MicroRNA-based Studies, 2016, CCS.

[84] Vitaly Shmatikov, et al. Membership Inference Attacks Against Machine Learning Models, 2016, 2017 IEEE Symposium on Security and Privacy (SP).

[85] Martín Abadi, et al. Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data, 2016, ICLR.

[86] Ian Goodfellow, et al. Deep Learning with Differential Privacy, 2016, CCS.

[87] Marc Joye, et al. A New Framework for Privacy-Preserving Aggregation of Time-Series Data, 2016, TSEC.

[88] Yoshinori Aono, et al. Scalable and Secure Logistic Regression via Homomorphic Encryption, 2016, IACR Cryptol. ePrint Arch.

[89] Blaise Agüera y Arcas, et al. Communication-Efficient Learning of Deep Networks from Decentralized Data, 2016, AISTATS.

[90] Somesh Jha, et al. Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures, 2015, CCS.

[91] Minghui Zhu, et al. Comparing Different Moving Target Defense Techniques, 2014, MTD '14.

[92] Somesh Jha, et al. Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing, 2014, USENIX Security Symposium.

[93] Ninghui Li, et al. Membership privacy: a unifying framework for privacy definitions, 2013, CCS.

[94] Richard Colbaugh, et al. Moving target defense for adaptive adversaries, 2013, 2013 IEEE International Conference on Intelligence and Security Informatics.

[95] Craig Gentry, et al. Pinocchio: Nearly Practical Verifiable Computation, 2013, 2013 IEEE Symposium on Security and Privacy.

[96] Martin J. Wainwright, et al. Local privacy and statistical minimax rates, 2013, 2013 51st Annual Allerton Conference on Communication, Control, and Computing (Allerton).

[97] Ivan Damgård, et al. Multiparty Computation from Somewhat Homomorphic Encryption, 2012, IACR Cryptol. ePrint Arch.

[98] Blaine Nelson, et al. Support Vector Machines Under Adversarial Label Noise, 2011, ACML.

[99] Claude Castelluccia, et al. I Have a DREAM! (DiffeRentially privatE smArt Metering), 2011, Information Hiding.

[100] Blaine Nelson, et al. The security of machine learning, 2010, Machine Learning.

[101] Qiang Yang, et al. A Survey on Transfer Learning, 2010, IEEE Transactions on Knowledge and Data Engineering.

[102] Suman Nath, et al. Differentially private aggregation of distributed time-series with transformation and encryption, 2010, SIGMOD Conference.

[103] Craig Gentry, et al. Fully homomorphic encryption using ideal lattices, 2009, STOC '09.

[104] Moni Naor, et al. Our Data, Ourselves: Privacy Via Distributed Noise Generation, 2006, EUROCRYPT.

[105] Blaine Nelson, et al. Can machine learning be secure?, 2006, ASIACCS '06.

[106] Chris Clifton, et al. Privacy-preserving distributed mining of association rules on horizontally partitioned data, 2004, IEEE Transactions on Knowledge and Data Engineering.

[107] Pascal Paillier, et al. Public-Key Cryptosystems Based on Composite Degree Residuosity Classes, 1999, EUROCRYPT.

[108] Silvio Micali, et al. The knowledge complexity of interactive proof-systems, 1985, STOC '85.

[109] Andrew Chi-Chih Yao, et al. Protocols for secure computations, 1982, 23rd Annual Symposium on Foundations of Computer Science (SFCS 1982).

[110] M. Saif, et al. Embedding Time-Series Features into Generative Adversarial Networks for Intrusion Detection in Internet of Things Networks, 2022, Intelligent Systems Reference Library.

[111] V. Palade, et al. Generative Adversarial Networks: A Survey on Training, Variants, and Applications, 2022, Intelligent Systems Reference Library.

[112] Prasant Mohapatra, et al. Vulnerabilities in Federated Learning, 2021, IEEE Access.

[113] Ben Y. Zhao, et al. With Great Training Comes Great Vulnerability: Practical Attacks against Transfer Learning, 2018, USENIX Security Symposium.

[114] Shai Halevi, et al. Homomorphic Encryption, 2017, Tutorials on the Foundations of Cryptography.

[115] Jason Brownlee, et al. Complex adaptive systems, 2007.

[116] Gu Si-yang, et al. Privacy preserving association rule mining in vertically partitioned data, 2006.

[117] Adi Shamir, et al. A method for obtaining digital signatures and public-key cryptosystems, 1978, CACM.