PLANNER: Generating Diversified Paragraph via Latent Language Diffusion Model