Elias Benussi | Yuki M. Asano | Yennie Jun | Haider Iqbal | Filippo Volpin | Frederic A. Dreyer | Aleksandar Shtedritski | Hannah Rose Kirk