[1] Christopher Potts, et al. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank, 2013, EMNLP.
[2] Michael Backes, et al. BadNL: Backdoor Attacks Against NLP Models, 2020, arXiv.
[3] Brendan Dolan-Gavitt, et al. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain, 2017, arXiv.
[4] Ilya Sutskever, et al. Language Models are Unsupervised Multitask Learners, 2019.
[5] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[6] Preslav Nakov, et al. SemEval-2019 Task 6: Identifying and Categorizing Offensive Language in Social Media (OffensEval), 2019, SemEval.
[7] Graham Neubig, et al. Weight Poisoning Attacks on Pretrained Models, 2020, ACL.
[8] Joan Bruna, et al. Intriguing Properties of Neural Networks, 2013, ICLR.
[9] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[10] Yong Jiang, et al. Backdoor Learning: A Survey, 2020, IEEE Transactions on Neural Networks and Learning Systems.
[11] Jiazhu Dai, et al. Mitigating Backdoor Attacks in LSTM-based Text Classification Systems by Backdoor Keyword Identification, 2020, arXiv.
[12] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[13] Wen-Chuan Lee, et al. Trojaning Attack on Neural Networks, 2018, NDSS.
[14] Mark Chen, et al. Language Models are Few-Shot Learners, 2020, NeurIPS.