Automatic Prompt Optimization with "Gradient Descent" and Beam Search

Large Language Models (LLMs) have shown impressive performance as general-purpose agents, but their abilities remain highly dependent on prompts that are hand-written through onerous trial-and-error effort. We propose a simple and nonparametric solution to this problem, Automatic Prompt Optimization (APO), which is inspired by numerical gradient descent to automatically improve prompts, assuming access to training data and an LLM API. The algorithm uses minibatches of data to form natural language "gradients" that criticize the current prompt. The gradients are then "propagated" into the prompt by editing the prompt in the opposite semantic direction of the gradient. These gradient descent steps are guided by a beam search and bandit selection procedure, which significantly improves algorithmic efficiency. Preliminary results across three benchmark NLP tasks and the novel problem of LLM jailbreak detection suggest that Automatic Prompt Optimization can outperform prior prompt editing techniques and improve an initial prompt's performance by up to 31%, by using data to rewrite vague task descriptions into more precise annotation instructions.
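To make the loop described above concrete, the sketch below shows one plausible realization: sample a minibatch, collect the current prompt's errors, ask the LLM for a natural language "gradient" (a critique), apply it by rewriting the prompt, and keep the best candidates in a beam. This is a minimal sketch under stated assumptions, not the paper's implementation: the llm() wrapper, the (input, gold label) data format, and the function names are illustrative, and the simple score-on-a-random-slice selection stands in for the paper's bandit procedure.

import random

def llm(prompt: str) -> str:
    """Placeholder for an LLM API call; wire this to your provider's completion endpoint."""
    raise NotImplementedError

def predict(prompt: str, x: str) -> str:
    """Run the task prompt on one input and return the model's predicted label."""
    return llm(f"{prompt}\n\nInput: {x}\nLabel:").strip()

def score(prompt: str, dataset: list[tuple[str, str]]) -> float:
    """Accuracy of the prompt on a labeled (input, gold label) set."""
    return sum(predict(prompt, x) == y for x, y in dataset) / len(dataset)

def textual_gradient(prompt: str, errors: list[tuple[str, str]]) -> str:
    """Ask the LLM to criticize the prompt given a minibatch of its mistakes."""
    mistakes = "\n".join(f"Input: {x}\nGold label: {y}" for x, y in errors)
    return llm(
        f"This prompt:\n{prompt}\n\ngot these examples wrong:\n{mistakes}\n\n"
        "Explain briefly why the prompt failed on them."
    )

def apply_gradient(prompt: str, gradient: str, n_edits: int = 4) -> list[str]:
    """Rewrite the prompt to address the critique, i.e. edit against the 'gradient'."""
    return [
        llm(
            f"Prompt:\n{prompt}\n\nCritique:\n{gradient}\n\n"
            "Rewrite the prompt so it no longer makes these mistakes."
        )
        for _ in range(n_edits)
    ]

def apo(initial_prompt: str, train: list[tuple[str, str]],
        steps: int = 6, beam_width: int = 4,
        minibatch_size: int = 8, eval_size: int = 32) -> str:
    """Beam search over prompts, expanding each beam element via textual gradients."""
    beam = [initial_prompt]
    for _ in range(steps):
        candidates = list(beam)
        for p in beam:
            minibatch = random.sample(train, minibatch_size)
            errors = [(x, y) for x, y in minibatch if predict(p, x) != y]
            if errors:
                candidates += apply_gradient(p, textual_gradient(p, errors))
        # Cheap stand-in for the paper's bandit selection: score every candidate
        # on a small random slice of the data and keep the top beam_width prompts.
        eval_slice = random.sample(train, min(eval_size, len(train)))
        candidates.sort(key=lambda c: score(c, eval_slice), reverse=True)
        beam = candidates[:beam_width]
    return beam[0]

In this sketch, textual_gradient plays the role of the natural language gradient and apply_gradient that of propagating it into the prompt; the top-k scoring over a small random slice is only a rough proxy for the best-arm-identification bandit the paper uses to keep candidate evaluation cheap.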
