Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery
[1] Mohit Bansal et al. GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models, 2022, EACL.
[2] Luke Zettlemoyer et al. Toward Human Readable Prompt Tuning: Kubrick's The Shining is a good movie, and a good prompt too?, 2022, EMNLP.
[3] Gabriel Ilharco et al. Reproducible Scaling Laws for Contrastive Language-Image Learning, 2022, 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[4] Andrew M. Dai et al. Scaling Instruction-Finetuned Language Models, 2022, ArXiv.
[5] Ludwig Schmidt et al. LAION-5B: An open large-scale dataset for training next generation image-text models, 2022, NeurIPS.
[6] Amit H. Bermano et al. An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion, 2022, ICLR.
[7] Biswajit Paria et al. Gradient-based Constrained Sampling from Language Models, 2022, EMNLP.
[8] Yihan Wang et al. RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning, 2022, EMNLP.
[9] Xi Victoria Lin et al. OPT: Open Pre-trained Transformer Language Models, 2022, ArXiv.
[10] S. Hoi et al. BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation, 2022, ICML.
[11] B. Ommer et al. High-Resolution Image Synthesis with Latent Diffusion Models, 2021, 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[12] Yejin Choi et al. Prompt Waywardness: The Curious Case of Discretized Interpretation of Continuous Prompts, 2021, NAACL.
[13] Xiaowei Hu et al. Scaling Up Vision-Language Pretraining for Image Captioning, 2021, 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[14] Maosong Sun et al. OpenPrompt: An Open-source Framework for Prompt-learning, 2021, ACL.
[15] Alexander M. Rush et al. Multitask Prompted Training Enables Zero-Shot Task Generalization, 2021, ICLR.
[16] Yejin Choi et al. VinVL: Revisiting Visual Representations in Vision-Language Models, 2021, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[17] Brian Lester et al. The Power of Scale for Parameter-Efficient Prompt Tuning, 2021, EMNLP.
[18] Ilya Sutskever et al. Learning Transferable Visual Models From Natural Language Supervision, 2021, ICML.
[19] Percy Liang et al. Prefix-Tuning: Optimizing Continuous Prompts for Generation, 2021, ACL.
[20] Sameer Singh et al. Eliciting Knowledge from Language Models Using Automatically Generated Prompts, 2020, EMNLP.
[21] Mark Chen et al. Language Models are Few-Shot Learners, 2020, NeurIPS.
[22] Frank Hutter et al. Decoupled Weight Decay Regularization, 2017, ICLR.
[23] Ilya Sutskever et al. Language Models are Unsupervised Multitask Learners, 2019.
[24] Noam Shazeer et al. Adafactor: Adaptive Learning Rates with Sublinear Memory Cost, 2018, ICML.
[25] Dejing Dou et al. HotFlip: White-Box Adversarial Examples for Text Classification, 2017, ACL.
[26] Hanan Samet et al. Training Quantized Nets: A Deeper Understanding, 2017, NIPS.
[27] Ali Farhadi et al. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks, 2016, ECCV.
[28] Ran El-Yaniv et al. Binarized Neural Networks, 2016, NIPS.
[29] Yoshua Bengio et al. BinaryConnect: Training Deep Neural Networks with binary weights during propagations, 2015, NIPS.
[30] Xiang Zhang et al. Character-level Convolutional Networks for Text Classification, 2015, NIPS.
[31] Xiaogang Wang et al. Deep Learning Face Attributes in the Wild, 2014, 2015 IEEE International Conference on Computer Vision (ICCV).
[32] Pietro Perona et al. Microsoft COCO: Common Objects in Context, 2014, ECCV.
[33] Jure Leskovec et al. Hidden factors and hidden topics: understanding rating dimensions with review text, 2013, RecSys.
[34] Christopher Potts et al. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank, 2013, EMNLP.