Skill-Based Few-Shot Selection for In-Context Learning
Shengnan An, Bo Zhou, Zeqi Lin, Qiang Fu, B. Chen, Nanning Zheng, Weizhu Chen, Jian-Guang Lou
[1] D. Zhang, et al. How Do In-Context Examples Affect Compositional Generalization?, 2023, ACL.
[2] Fei Huang, et al. Can LLM Already Serve as A Database Interface? A BIg Bench for Large-Scale Database Grounded Text-to-SQLs, 2023, ArXiv.
[3] Davood Rafiei, et al. DIN-SQL: Decomposed In-Context Learning of Text-to-SQL with Self-Correction, 2023, NeurIPS.
[4] Song-Chun Zhu, et al. Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models, 2023, ArXiv.
[5] Xuming Hu, et al. A comprehensive evaluation of ChatGPT's zero-shot Text-to-SQL capability, 2023, ArXiv.
[6] Xipeng Qiu, et al. Finding Support Examples for In-Context Learning, 2023, EMNLP.
[7] Eric Wong, et al. In-context Example Selection with Influences, 2023, ArXiv.
[8] Cuiping Li, et al. RESDSQL: Decoupling Schema Linking and Skeleton Parsing for Text-to-SQL, 2023, AAAI.
[9] Lingpeng Kong, et al. Compositional Exemplars for In-context Learning, 2023, ICML.
[10] Luke Zettlemoyer, et al. Toolformer: Language Models Can Teach Themselves to Use Tools, 2023, NeurIPS.
[11] Michihiro Yasunaga, et al. Is ChatGPT a General-Purpose Natural Language Processing Task Solver?, 2023, EMNLP.
[12] William Yang Wang, et al. Dr.Spider: A Diagnostic Evaluation Benchmark towards Text-to-SQL Robustness, 2023, ICLR.
[13] Lingpeng Kong, et al. Self-Adaptive In-Context Learning: An Information Compression Perspective for In-Context Example Selection and Ordering, 2022, ACL.
[14] Jonathan Berant, et al. Diverse Demonstrations Improve In-context Compositional Generalization, 2022, ACL.
[15] Greg Durrett, et al. Complementary Explanations for Effective In-Context Learning, 2022, ACL.
[16] Yiming Zhang, et al. Active Example Selection for In-Context Learning, 2022, EMNLP.
[17] Hyung Won Chung, et al. Language Models are Multilingual Chain-of-Thought Reasoners, 2022, ICLR.
[18] Dragomir R. Radev, et al. Binding Language Models in Symbolic Languages, 2022, ICLR.
[19] K. McKeown, et al. On the Relation between Sensitivity and Accuracy in In-context Learning, 2022, EMNLP.
[20] Weizhu Chen, et al. CodeT: Code Generation with Generated Tests, 2022, ICLR.
[21] J. Dean, et al. Emergent Abilities of Large Language Models, 2022, Trans. Mach. Learn. Res.
[22] Ronan Le Bras, et al. Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models, 2022, ArXiv.
[23] Michael Pradel, et al. Code Generation Tools (Almost) for Free? A Study of Few-Shot, Pre-Trained Language Models on Code, 2022, ArXiv.
[24] I. Higgins, et al. Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning, 2022, ICLR.
[25] Zhouhan Lin, et al. RASAT: Integrating Relational Structures into Pretrained Seq2Seq Model for Text-to-SQL, 2022, EMNLP.
[26] Xi Victoria Lin, et al. OPT: Open Pre-trained Transformer Language Models, 2022, ArXiv.
[27] Andrew M. Dai, et al. PaLM: Scaling Language Modeling with Pathways, 2022, J. Mach. Learn. Res.
[28] Lisa Anne Hendricks, et al. Training Compute-Optimal Large Language Models, 2022, ArXiv.
[29] D. Schuurmans, et al. Self-Consistency Improves Chain of Thought Reasoning in Language Models, 2022, ICLR.
[30] Noah A. Smith, et al. In-Context Learning for Few-Shot Dialogue State Tracking, 2022, EMNLP.
[31] Dzmitry Bahdanau, et al. Evaluating the Text-to-SQL Capabilities of Large Language Models, 2022, ArXiv.
[32] A. Cherepanov, et al. Competition-level code generation with AlphaCode, 2022, Science.
[33] Reza Yazdani Aminabadi, et al. Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model, 2022, ArXiv.
[34] Sumit Gulwani, et al. Synchromesh: Reliable code generation from pre-trained language models, 2022, ICLR.
[35] Peter Welinder, et al. Text and Code Embeddings by Contrastive Pre-Training, 2022, ArXiv.
[36] Jeff Wu, et al. WebGPT: Browser-assisted question-answering with human feedback, 2021, ArXiv.
[37] Jonathan Berant, et al. Learning To Retrieve Prompts for In-Context Learning, 2021, NAACL.
[38] Po-Sen Huang, et al. Scaling Language Models: Methods, Analysis & Insights from Training Gopher, 2021, ArXiv.
[39] Mohammad Bavarian, et al. Training Verifiers to Solve Math Word Problems, 2021, ArXiv.
[40] Dzmitry Bahdanau, et al. PICARD: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models, 2021, EMNLP.
[41] Wojciech Zaremba, et al. Evaluating Large Language Models Trained on Code, 2021, ArXiv.
[42] Matthew Richardson, et al. KaggleDBQA: Realistic Evaluation of Text-to-SQL Parsers, 2021, ACL.
[43] S. Riedel, et al. Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity, 2021, ACL.
[44] Dan Klein, et al. Constrained Language Models Yield Few-Shot Semantic Parsers, 2021, EMNLP.
[45] D. Klein, et al. Calibrate Before Use: Improving Few-Shot Performance of Language Models, 2021, ICML.
[46] Weizhu Chen, et al. What Makes Good In-Context Examples for GPT-3?, 2021, DEELIO.
[47] Danqi Chen, et al. Making Pre-trained Language Models Better Few-shot Learners, 2021, ACL.
[48] Richard Socher, et al. Bridging Textual and Tabular Data for Cross-Domain Text-to-SQL Semantic Parsing, 2020, Findings of EMNLP.
[49] Tal Linzen, et al. COGS: A Compositional Generalization Challenge Based on Semantic Interpretation, 2020, EMNLP.
[50] Sida I. Wang, et al. Grounded Adaptation for Zero-shot Executable Semantic Parsing, 2020, EMNLP.
[51] Dawn Song, et al. Measuring Massive Multitask Language Understanding, 2020, ICLR.
[52] Mark Chen, et al. Language Models are Few-Shot Learners, 2020, NeurIPS.
[53] Xiaodong Liu, et al. RAT-SQL: Relation-Aware Schema Encoding and Linking for Text-to-SQL Parsers, 2019, ACL.
[54] Iryna Gurevych, et al. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks, 2019, EMNLP.
[55] Yan Gao, et al. Towards Complex Text-to-SQL in Cross-Domain Database with Intermediate Representation, 2019, ACL.
[56] Tao Yu, et al. Spider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-SQL Task, 2018, EMNLP.
[57] Tao Yu, et al. TypeSQL: Knowledge-Based Type-Aware Neural Text-to-SQL Generation, 2018, NAACL.
[58] Dawn Song, et al. SQLNet: Generating Structured Queries From Natural Language Without Reinforcement Learning, 2017, ArXiv.
[59] Silvio Savarese, et al. Active Learning for Convolutional Neural Networks: A Core-Set Approach, 2017, ICLR.
[60] Mirella Lapata, et al. Language to Logical Form with Neural Attention, 2016, ACL.
[61] Luke Zettlemoyer, et al. Learning to Map Sentences to Logical Form: Structured Classification with Probabilistic Categorial Grammars, 2005, UAI.
[62] Rohit J. Kate, et al. Learning to Transform Natural to Formal Languages, 2005, AAAI.
[63] Raymond J. Mooney, et al. Learning to Parse Database Queries Using Inductive Logic Programming, 1996, AAAI/IAAI, Vol. 2.
[64] Ge Li, et al. Towards Enhancing In-Context Learning for Code Generation, 2023, ArXiv.
[65] Xu Tan, et al. HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face, 2023, NeurIPS.
[66] Greg Durrett, et al. Explanation Selection Using Unlabeled Data for In-Context Learning, 2023, ArXiv.
[67] Weizhu Chen, et al. On the Advance of Making Language Models Better Reasoners, 2022, ArXiv.
[68] Ellie Pavlick, et al. Mapping Language Models to Grounded Conceptual Spaces, 2022, ICLR.
[69] Geoffrey E. Hinton, et al. Visualizing Data using t-SNE, 2008, J. Mach. Learn. Res.