Self-QA: Unsupervised Knowledge Guided Language Model Alignment
[1] Yiming Yang, et al. Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision, 2023, NeurIPS.
[2] Chunyuan Li, et al. Instruction Tuning with GPT-4, 2023, ArXiv.
[3] Julian McAuley, et al. Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data, 2023, ArXiv.
[4] Noah A. Smith, et al. Self-Instruct: Aligning Language Models with Self-Generated Instructions, 2022, ArXiv.
[5] Alexander M. Rush, et al. BLOOM: A 176B-Parameter Open-Access Multilingual Language Model, 2022, ArXiv.
[6] Xuanyu Zhang, et al. TranS: Transition-based Knowledge Graph Embedding with Synthetic Relation Representation, 2022, EMNLP.
[7] Ryan J. Lowe, et al. Training language models to follow instructions with human feedback, 2022, NeurIPS.
[8] Zhichun Wang, et al. Rception: Wide and Deep Interaction Networks for Machine Reading Comprehension (Student Abstract), 2020, AAAI.
[9] Xuanyu Zhang, et al. MC^2: Multi-perspective Convolutional Cube for Conversational Machine Reading Comprehension, 2019, ACL.
[10] Alec Radford, et al. Improving Language Understanding by Generative Pre-Training, 2018.
[11] Christina A. Rouse, et al. The Effects of Self-Questioning on Reading Comprehension: A Literature Review, 2016.