Backdoor Attacks to Pre-trained Unified Foundation Models
[1] Nanyang Technological University, et al. A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT, 2023, ArXiv.
[2] Aniruddha Kembhavi, et al. Unified-IO: A Unified Model for Vision, Language, and Multi-Modal Tasks, 2022, ICLR.
[3] Sergio Gomez Colmenarejo, et al. A Generalist Agent, 2022, Trans. Mach. Learn. Res.
[4] Jingren Zhou, et al. OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework, 2022, ICML.
[5] Shangwei Guo, et al. BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models, 2021, ICLR.
[6] Neil Zhenqiang Gong, et al. BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning, 2021, 2022 IEEE Symposium on Security and Privacy (SP).
[7] Michael S. Bernstein, et al. On the Opportunities and Risks of Foundation Models, 2021, ArXiv.
[8] Graham Neubig, et al. Weight Poisoning Attacks on Pretrained Models, 2020, ACL.