Model Sparsity Can Simplify Machine Unlearning
Yang Liu | Pranay Sharma | Yuguang Yao | Sijia Liu | Jiancheng Liu | Jinghan Jia | Gaowen Liu | Parikshit Ram