Aligning Large Language Models through Synthetic Feedback

Aligning large language models (LLMs) to human values has become increasingly important because it enables sophisticated steering of LLMs, e.g., making them follow given instructions while keeping them less toxic. However, it requires significant amounts of human demonstrations and feedback. Recently, open-sourced models have attempted to replicate the alignment learning process by distilling data from already aligned LLMs such as InstructGPT or ChatGPT. While this process reduces human effort, constructing these datasets depends heavily on the teacher models. In this work, we propose a novel framework for alignment learning with almost no human labor and no dependency on pre-aligned LLMs. First, we perform reward modeling (RM) with synthetic feedback by contrasting responses from vanilla LLMs of various sizes and with various prompts. Then, we use the RM to simulate high-quality demonstrations for training a supervised policy and to further optimize the model with reinforcement learning. Our resulting model, Aligned Language Model with Synthetic Training dataset (ALMoST), outperforms open-sourced models, including Alpaca, Dolly, and OpenAssistant, which are trained on the outputs of InstructGPT or human-annotated instructions. Our 7B-sized model outperforms the 12-13B models in A/B tests using GPT-4 as the judge, with an average winning rate of about 75%.
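The sketch below illustrates one way the synthetic-feedback step described above could be operationalized; it is not the authors' released code, and the function names (`synthetic_comparisons`, `reward_model_loss`) are hypothetical. The assumption it encodes is that a response sampled from a stronger configuration (larger vanilla LLM, richer few-shot prompt) is treated as preferred over one from a weaker configuration, and these synthetic preference pairs train a reward model with a standard pairwise ranking loss.

```python
# Minimal sketch, assuming the ranking heuristic described in the abstract:
# responses from stronger generator configurations are labeled as preferred
# over responses from weaker ones. Hypothetical helper names throughout.
import torch.nn.functional as F


def synthetic_comparisons(prompt, generators):
    """Build (prompt, chosen, rejected) triples from `generators`, a list of
    callables ordered from assumed-strongest (large model, rich few-shot
    prompt) to assumed-weakest. Each callable maps a prompt to a response."""
    responses = [gen(prompt) for gen in generators]
    # A response from an earlier (stronger) generator is labeled as preferred
    # over every response from a later (weaker) generator.
    return [(prompt, responses[i], responses[j])
            for i in range(len(responses))
            for j in range(i + 1, len(responses))]


def reward_model_loss(reward_model, prompt, chosen, rejected):
    """Standard pairwise ranking loss for reward modeling:
    -log sigmoid(r(prompt, chosen) - r(prompt, rejected))."""
    return -F.logsigmoid(
        reward_model(prompt, chosen) - reward_model(prompt, rejected)
    )
```

The pairwise log-sigmoid objective is the usual choice for learning a scalar reward from preference pairs; once trained on such synthetic comparisons, the reward model can filter sampled demonstrations for supervised training and serve as the reward signal for the subsequent reinforcement learning stage.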
