ShuffleFL: gradient-preserving federated learning using trusted execution environment

Federated Learning (FL) is a promising approach to privacy-preserving machine learning. However, recent work shows that shared gradients can leak private training data. Running aggregation inside trusted SGX processors can keep gradients confidential, but only if side-channel attacks on the enclave are prevented. In this work, we present ShuffleFL, a gradient-preserving system built on trusted SGX that combines a random group structure with intra-group gradient segment aggregation to defeat such side-channel attacks. We analyze the security of the system against semi-honest adversaries and show that ShuffleFL effectively protects the participants' gradient privacy. We also evaluate the performance of ShuffleFL and demonstrate its applicability to federated learning systems.
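
The grouping-and-segment-aggregation idea can be illustrated with a minimal sketch. The snippet below is our own toy simulation of the scheme as described in the abstract, not the paper's implementation: the function name `shuffle_aggregate`, the group size, the number of segments, and the use of plain NumPy (rather than code running inside an SGX enclave) are all assumptions made for illustration.

```python
# Toy sketch of random grouping + intra-group segment-wise aggregation
# (assumed parameters; the real system performs this inside an SGX enclave).
import numpy as np

def shuffle_aggregate(gradients, group_size=4, num_segments=8, rng=None):
    """Randomly group participant gradients, then aggregate each group
    one gradient segment at a time before forming the global average,
    so no observer of the aggregation step sees a full per-participant gradient."""
    rng = rng or np.random.default_rng()
    n, dim = gradients.shape
    order = rng.permutation(n)                      # random group structure
    seg_bounds = np.linspace(0, dim, num_segments + 1, dtype=int)

    global_sum = np.zeros(dim)
    for start in range(0, n, group_size):
        group = gradients[order[start:start + group_size]]
        # Intra-group aggregation, segment by segment.
        for lo, hi in zip(seg_bounds[:-1], seg_bounds[1:]):
            global_sum[lo:hi] += group[:, lo:hi].sum(axis=0)
    return global_sum / n                           # FedAvg-style mean

# Usage: 16 participants, a 1000-parameter model.
grads = np.random.randn(16, 1000)
avg_update = shuffle_aggregate(grads, rng=np.random.default_rng(0))
```

The aggregated result equals the ordinary gradient mean; the random grouping and segment-wise processing only change which intermediate values are ever materialized.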
