Zhijian Ou | Junlan Feng | Hong Liu | Yucheng Cai | Zhenru Lin | Yi Huang
[1] Xiaojin Zhu, et al. Semi-Supervised Learning Literature Survey, 2006.
[2] Ivan Vulić, et al. Hello, It’s GPT-2 - How Can I Help You? Towards the Use of Pretrained Language Models for Task-Oriented Dialogue Systems, 2019, EMNLP.
[3] Min-Yen Kan, et al. Sequicity: Simplifying Task-oriented Dialogue Systems with Single Sequence-to-Sequence Architectures, 2018, ACL.
[4] Xiaojun Quan, et al. UBAR: Towards Fully End-to-End Task-Oriented Dialog Systems with GPT-2, 2020, ArXiv.
[5] Gökhan Tür, et al. Flexibly-Structured Model for Task-Oriented Dialogues, 2019, SIGdial.
[6] Hua Wu, et al. PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable, 2020, ACL.
[7] Tsung-Hsien Wen, et al. Latent Intention Dialogue Models, 2017, ICML.
[8] Zhijian Ou, et al. A Probabilistic End-To-End Task-Oriented Dialog Model with Latent Belief States towards Semi-Supervised Learning, 2020, EMNLP.
[9] Stefan Ultes, et al. MultiWOZ - A Large-Scale Multi-Domain Wizard-of-Oz Dataset for Task-Oriented Dialogue Modelling, 2018, EMNLP.
[10] Tsung-Hsien Wen, et al. Neural Belief Tracker: Data-Driven Dialogue State Tracking, 2016, ACL.
[11] Kee-Eung Kim, et al. End-to-End Neural Pipeline for Goal-Oriented Dialogue Systems using GPT-2, 2020, ACL.
[12] Quoc V. Le, et al. Sequence to Sequence Learning with Neural Networks, 2014, NIPS.
[13] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[14] Bing Liu, et al. An End-to-End Trainable Neural Network Model with Belief Tracking for Task-Oriented Dialog, 2017, INTERSPEECH.
[15] Ondrej Dusek, et al. AuGPT: Dialogue with Pre-trained Language Models and Data Augmentation, 2021, ArXiv.
[16] Ilya Sutskever, et al. Language Models are Unsupervised Multitask Learners, 2019.
[17] Zheng Zhang, et al. CrossWOZ: A Large-Scale Chinese Cross-Domain Task-Oriented Dialogue Dataset, 2020, Transactions of the Association for Computational Linguistics.
[18] Zhijian Ou, et al. Task-Oriented Dialog Systems that Consider Multiple Appropriate Responses under the Same Context, 2019, AAAI.
[19] Nurul Lubis, et al. TripPy: A Triple Copy Strategy for Value Independent Neural Dialog State Tracking, 2020, SIGdial.
[20] Baolin Peng, et al. Soloist: Building Task Bots at Scale with Transfer Learning and Machine Teaching, 2021, Transactions of the Association for Computational Linguistics.
[21] Maxine Eskénazi, et al. Rethinking Action Spaces for Reinforcement Learning in End-to-end Dialog Agents with Latent Variable Models, 2019, NAACL.
[22] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[23] Maxine Eskénazi, et al. Structured Fusion Networks for Dialog, 2019, SIGdial.
[24] Mihail Eric, et al. MultiWOZ 2.1: A Consolidated Multi-Domain Dialogue Dataset with State Corrections and State Tracking Baselines, 2019.
[25] Byeongchang Kim, et al. Sequential Latent Knowledge Selection for Knowledge-Grounded Dialogue, 2020, ICLR.
[26] Ben Poole, et al. Categorical Reparameterization with Gumbel-Softmax, 2016, ICLR.
[27] David Vandyke, et al. A Network-based End-to-End Trainable Task-oriented Dialogue System, 2016, EACL.
[28] Zhijian Ou, et al. Paraphrase Augmented Task-Oriented Dialog Generation, 2020, ACL.
[29] Yoshua Bengio, et al. Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation, 2013, ArXiv.
[30] Richard Socher, et al. A Simple Language Model for Task-Oriented Dialogue, 2020, NeurIPS.
[31] Max Welling, et al. Auto-Encoding Variational Bayes, 2013, ICLR.
[32] Zhaochun Ren, et al. Explicit State Tracking with Semi-Supervision for Neural Dialogue Generation, 2018, CIKM.
[33] Alec Radford, et al. Improving Language Understanding by Generative Pre-Training, 2018.