Unified Detoxifying and Debiasing in Language Generation via Inference-time Adaptive Optimization

Warning: this paper contains model outputs exhibiting offensiveness and biases.

Recently, pre-trained language models (PLMs) have prospered in various natural language generation (NLG) tasks due to their ability to generate fairly fluent text. Nevertheless, these models have been observed to capture and reproduce harmful content from their training corpora, typically toxic language and social biases, raising serious ethical concerns. Prior work on ethical NLG tackles detoxification and debiasing separately, which is problematic: we find that debiased models still exhibit toxicity, while detoxified ones even exacerbate social biases. To address this challenge, we propose UDDIA, the first unified framework for detoxifying and debiasing, which jointly formalizes the two problems as rectifying the output space. We theoretically interpret our framework as learning a text distribution that mixes weighted attributes. Moreover, UDDIA adaptively optimizes only a small number of parameters during decoding, based on a parameter-efficient tuning scheme that requires no training data. This incurs minimal loss in generation quality and yields improved rectification performance at acceptable computational cost. Experimental results demonstrate that, compared with several strong baselines, UDDIA achieves detoxification and debiasing simultaneously and strikes a better balance between efficiency and effectiveness, taking a further step towards practical ethical NLG.
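To make the "mixing weighted attributes" view concrete, one natural reading is a weighted product-of-experts over attribute distributions. The following is a minimal sketch of that interpretation; the attribute models p(a_i | x) and the weights w_i are illustrative notation introduced here, not taken from the paper:

```latex
% Rectified output distribution as a weighted mixture of attributes
% (illustrative notation; the attribute models and weights are assumptions).
\[
  \tilde{p}(x_t \mid x_{<t}) \;\propto\;
  p_{\mathrm{LM}}(x_t \mid x_{<t})
  \prod_{i=1}^{k} p(a_i \mid x_{\le t})^{\,w_i},
\]
% where a_1, ..., a_k are desired attributes (e.g., non-toxicity,
% demographic neutrality) and w_i >= 0 control their relative strength.
```

Similarly, the decoding-time, parameter-efficient optimization can be illustrated by updating only the model's bias terms (BitFit-style) against an attribute loss at each generation step. Below is a minimal PyTorch sketch under that assumption; the `attribute_loss` scorer and its token list are hypothetical stand-ins, the sketch omits UDDIA's adaptive update schedule, and it is not the authors' implementation:

```python
# Sketch: decoding-time rectification by tuning only bias parameters.
# Assumptions (not from the paper): a HuggingFace causal LM and a
# hypothetical attribute_loss that scores the next-token distribution.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Tune only the bias vectors: a tiny fraction of all parameters.
for p in model.parameters():
    p.requires_grad_(False)
bias_params = [p for n, p in model.named_parameters() if n.endswith("bias")]
for p in bias_params:
    p.requires_grad_(True)
optimizer = torch.optim.Adam(bias_params, lr=1e-3)

def attribute_loss(next_token_probs):
    # Hypothetical stand-in: penalize probability mass on an
    # (assumed) list of undesired token ids.
    undesired_ids = torch.tensor([31699])  # placeholder token id
    return next_token_probs[:, undesired_ids].sum()

input_ids = tokenizer("The nurse said that", return_tensors="pt").input_ids
for _ in range(20):  # generate 20 tokens
    for _ in range(3):  # a few rectification steps per token
        logits = model(input_ids).logits[:, -1, :]
        loss = attribute_loss(torch.softmax(logits, dim=-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    with torch.no_grad():
        logits = model(input_ids).logits[:, -1, :]
        next_id = torch.multinomial(torch.softmax(logits, dim=-1), 1)
    input_ids = torch.cat([input_ids, next_id], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

Restricting updates to bias terms keeps the tuned parameter count small, which is consistent with the abstract's claim of minimal generation quality loss at acceptable computational cost; how and when UDDIA decides to apply such updates is described in the paper itself.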
