[1] Hamed Haddadi, et al. DarkneTZ: towards model privacy at the edge using trusted execution environments, 2020, MobiSys.
[2] Reza Shokri, et al. Comprehensive Privacy Analysis of Deep Learning: Stand-alone and Federated Learning under Passive and Active White-box Inference Attacks, 2018, ArXiv.
[3] Li Fei-Fei, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[4] Sushil Jajodia, et al. Exploring steganography: Seeing the unseen, 1998, Computer.
[5] Florian Kerschbaum, et al. On the Robustness of Backdoor-based Watermarking in Deep Neural Networks, 2019, IH&MMSec.
[6] Xiangyu Zhang, et al. Channel Pruning for Accelerating Very Deep Neural Networks, 2017, ICCV.
[7] Liwei Song, et al. A Critical Evaluation of Open-World Machine Learning, 2020, ArXiv.
[8] Samuel Marchal, et al. DAWN: Dynamic Adversarial Watermarking of Neural Networks, 2019, ACM Multimedia.
[9] Vitaly Shmatikov, et al. How To Backdoor Federated Learning, 2018, AISTATS.
[10] Junpu Wang, et al. FedMD: Heterogenous Federated Learning via Model Distillation, 2019, ArXiv.
[11] Luca Antiga, et al. Automatic differentiation in PyTorch, 2017.
[12] Yoshua Bengio, et al. How transferable are features in deep neural networks?, 2014, NIPS.
[13] Thierry Pun, et al. Robust template matching for affine resistant image watermarks, 2000, IEEE Trans. Image Process.
[14] Shin'ichi Satoh, et al. Embedding Watermarks into Deep Neural Networks, 2017, ICMR.
[15] Blaise Agüera y Arcas, et al. Communication-Efficient Learning of Deep Networks from Decentralized Data, 2016, AISTATS.
[16] Samuel Marchal, et al. PRADA: Protecting Against DNN Model Stealing Attacks, 2018, EuroS&P.
[17] Brendan Dolan-Gavitt, et al. Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks, 2018, RAID.
[18] Amir Houmansadr, et al. Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning, 2018, IEEE S&P.
[19] Timo Aila, et al. Pruning Convolutional Neural Networks for Resource Efficient Inference, 2016, ICLR.
[20] Ben Y. Zhao, et al. Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks, 2019, IEEE S&P.
[21] Vitaly Shmatikov, et al. Exploiting Unintended Feature Leakage in Collaborative Learning, 2018, IEEE S&P.
[22] Giuseppe Ateniese, et al. Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning, 2017, CCS.
[23] Farinaz Koushanfar, et al. DeepMarks: A Secure Fingerprinting Framework for Digital Rights Management of Deep Learning Models, 2019, ICMR.
[24] Farinaz Koushanfar, et al. DeepSigns: An End-to-End Watermarking Framework for Ownership Protection of Deep Neural Networks, 2019, ASPLOS.
[25] Ben Y. Zhao, et al. Piracy Resistant Watermarks for Deep Neural Networks, 2019.
[26] Hui Wu, et al. Protecting Intellectual Property of Deep Neural Networks with Watermarking, 2018, AsiaCCS.
[27] Song Han, et al. Learning both Weights and Connections for Efficient Neural Network, 2015, NIPS.
[28] Hanan Samet, et al. Pruning Filters for Efficient ConvNets, 2016, ICLR.
[29] Tribhuvanesh Orekondy, et al. Knockoff Nets: Stealing Functionality of Black-Box Models, 2018, CVPR.
[30] A. K. Singh, et al. A novel technique for digital image watermarking in spatial domain, 2012, PDGC.
[31] Tassilo Klein, et al. Differentially Private Federated Learning: A Client Level Perspective, 2017, ArXiv.
[32] Fan Zhang, et al. Stealing Machine Learning Models via Prediction APIs, 2016, USENIX Security Symposium.
[33] Sarvar Patel, et al. Practical Secure Aggregation for Privacy-Preserving Machine Learning, 2017, IACR Cryptol. ePrint Arch.
[34] Daniel Rueckert, et al. A generic framework for privacy preserving deep learning, 2018, ArXiv.
[35] Lili Su, et al. Distributed Statistical Machine Learning in Adversarial Settings: Byzantine Gradient Descent, 2019, PERV.
[36] Erwan Le Merrer, et al. Adversarial frontier stitching for remote neural network watermarking, 2017, Neural Computing and Applications.
[37] Farinaz Koushanfar, et al. Performance Comparison of Contemporary DNN Watermarking Techniques, 2018, ArXiv.
[38] Miodrag Potkonjak, et al. Watermarking Deep Neural Networks for Embedded Systems, 2018, ICCAD.
[39] Kyogu Lee, et al. Digital Watermarking For Protecting Audio Classification Datasets, 2020, ICASSP.
[40] Dmitry P. Vetrov, et al. Structured Bayesian Pruning via Log-Normal Multiplicative Noise, 2017, NIPS.
[41] Cordelia Schmid, et al. Radioactive data: tracing through training, 2020, ICML.
[42] Benny Pinkas, et al. Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring, 2018, USENIX Security Symposium.
[43] Vinod Ganapathy, et al. ActiveThief: Model Extraction Using Active Learning and Unannotated Public Data, 2020, AAAI.
[44] Shanqing Guo, et al. How to prove your model belongs to you: a blind-watermark based framework to protect intellectual property of DNN, 2019, ACSAC.
[45] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[46] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[47] Lixin Fan, et al. Rethinking Deep Neural Network Ownership Verification: Embedding Passports to Defeat Ambiguity Attacks, 2019, NeurIPS.
[48] Wen-Chuan Lee, et al. Trojaning Attack on Neural Networks, 2018, NDSS.
[49] Hubert Eichner, et al. Towards Federated Learning at Scale: System Design, 2019, SysML.
[50] Rachid Guerraoui, et al. The Hidden Vulnerability of Distributed Learning in Byzantium, 2018, ICML.