Alec Radford | Jeff Wu | Miles Brundage | Ariel Herbert-Voss | Jasmine Wang | Jack Clark | Amanda Askell | Irene Solaiman
[1] Quoc V. Le, et al. Semi-supervised Sequence Learning, 2015, NIPS.
[2] Dirk Hovy, et al. The Social Impact of Natural Language Processing, 2016, ACL.
[3] Perry R. Hinton. Implicit stereotypes and the predictive brain: cognition and culture in “biased” person perception, 2017, Palgrave Communications.
[4] R. Guilbeault. Computational Propaganda in the United States of America: Manufacturing Consensus Online, Working Paper No. 2017.5, 2017.
[5] Imran Awan. Cyber-Extremism: Isis and the Power of Social Media, 2017, Society.
[6] Arvind Narayanan, et al. Semantics derived automatically from language corpora contain human-like biases, 2016, Science.
[7] Sheng Yu, et al. Generation of Synthetic Electronic Medical Record Text, 2018, IEEE International Conference on Bioinformatics and Biomedicine (BIBM).
[8] Sebastian Ruder, et al. Universal Language Model Fine-tuning for Text Classification, 2018, ACL.
[9] K. Cox, et al. Social Media in Africa: A Double-Edged Sword for Security and Development, 2018.
[10] Emily M. Bender, et al. Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science, 2018, TACL.
[11] Jess Whittlestone, et al. The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions, 2019, AIES.
[12] Peter Szolovits, et al. Clinically Accurate Chest X-Ray Report Generation, 2019, MLHC.
[13] Jason Weston, et al. What makes a good conversation? How controllable attributes affect human judgments, 2019, NAACL.
[14] Alexander M. Rush, et al. GLTR: Statistical Detection and Visualization of Generated Text, 2019, ACL.
[15] Jayadev Bhaskaran, et al. Good Secretaries, Bad Truck Drivers? Occupational Gender Stereotypes in Sentiment Analysis, 2019, Proceedings of the First Workshop on Gender Bias in Natural Language Processing.
[16] Ilya Sutskever, et al. Language Models are Unsupervised Multitask Learners, 2019.
[17] Dimitrios Alikaniotis, et al. The Unreasonable Effectiveness of Transformer Language Models in Grammatical Error Correction, 2019, BEA@ACL.
[18] Ivan Vulić, et al. Hello, It’s GPT-2 - How Can I Help You? Towards the Use of Pretrained Language Models for Task-Oriented Dialogue Systems, 2019, EMNLP.
[19] Ali Farhadi, et al. Defending Against Neural Fake News, 2019, NeurIPS.
[20] Natalia Criado, et al. Attesting Biases and Discrimination using Language Semantics, 2019, ArXiv.
[21] Kyle Lo, et al. SciBERT: Pretrained Contextualized Embeddings for Scientific Text, 2019, ArXiv.
[22] Erik T. Mueller, et al. Multi-turn Dialogue Response Generation with Autoregressive Transformer Models, 2019, ArXiv.
[23] Cao Xiao, et al. EEGtoText: Learning to Write Medical Reports from EEG Recordings, 2019, MLHC.
[24] Filippo Menczer, et al. Arming the public with artificial intelligence to counter social bots, 2019, Human Behavior and Emerging Technologies.
[25] Jess Whittlestone, et al. Reducing malicious use of synthetic media research: Considerations and potential release practices for machine learning, 2019, ArXiv.
[26] Marc'Aurelio Ranzato, et al. Real or Fake? Learning to Discriminate Machine from Human Generated Text, 2019, ArXiv.
[27] Junichi Yamagishi, et al. Generating Sentiment-Preserving Fake Online Reviews Using Neural Language Models and Their Human- and Machine-based Detection, 2019, AINA.
[28] Rik van Noord, et al. Fair Is Better than Sensational: Man Is to Doctor as Woman Is to Doctor, 2019, CL.
[29] Yejin Choi, et al. The Curious Case of Neural Text Degeneration, 2019, ICLR.
[30] Iyad Rahwan, et al. Human detection of machine-manipulated media, 2019, Commun. ACM.
[31] Mark Amerika. Talk to Transformer, 2021.