暂无分享,去创建一个
[1] Lixin Fan,et al. Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks , 2020, Federated Learning.
[2] Jonas Geiping,et al. Inverting Gradients - How easy is it to break privacy in federated learning? , 2020, NeurIPS.
[3] Ruby B. Lee,et al. Model inversion attacks against collaborative inference , 2019, ACSAC.
[4] Peter Richtárik,et al. Federated Learning: Strategies for Improving Communication Efficiency , 2016, ArXiv.
[5] Maria Rigaki,et al. A Survey of Privacy Attacks in Machine Learning , 2020, ArXiv.
[6] Yang Song,et al. Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning , 2018, IEEE INFOCOM 2019 - IEEE Conference on Computer Communications.
[7] Wenqi Wei,et al. A Framework for Evaluating Client Privacy Leakages in Federated Learning , 2020, ESORICS.
[8] Zhenkai Liang,et al. Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment , 2019, CCS.
[9] Dawn Song,et al. The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks , 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[10] Bo Zhao,et al. iDLG: Improved Deep Leakage from Gradients , 2020, ArXiv.
[11] Jorge Nocedal,et al. On the limited memory BFGS method for large scale optimization , 1989, Math. Program..
[12] Emiliano De Cristofaro. An Overview of Privacy in Machine Learning , 2020, ArXiv.
[13] Gene H. Golub,et al. Matrix computations , 1983 .
[14] Jimmy Ba,et al. Adam: A Method for Stochastic Optimization , 2014, ICLR.
[15] PhongLe Trieu,et al. Privacy-Preserving Deep Learning via Additively Homomorphic Encryption , 2018 .
[16] Song Han,et al. Deep Leakage from Gradients , 2019, NeurIPS.
[17] Blaise Aguera y Arcas presents a new way to look at digital images , 2009 .