Bringing the State-of-the-Art to Customers: A Neural Agent Assistant Framework for Customer Service Support

Building Agent Assistants that improve customer service support requires input from industry users and their customers, as well as knowledge of state-of-the-art Natural Language Processing (NLP) technology. We combine expertise from academia and industry to bridge this gap and build task- and domain-specific Neural Agent Assistants (NAA) with three high-level components: (1) Intent Identification, (2) Context Retrieval, and (3) Response Generation. In this paper, we outline the pipeline of the NAA's core system and present three case studies in which industry partners successfully adapt the framework to solve their unique challenges. Our findings suggest that a collaborative process is instrumental in spurring the development of emerging NLP models for Conversational AI tasks in industry. The full reference implementation code and results are available at \url{https://github.com/VectorInstitute/NAA}.
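The three-stage pipeline above can be sketched end to end. The following is a minimal toy illustration of how the stages compose, not the authors' actual implementation: all function names are hypothetical, and each stage is stubbed with simple token-overlap heuristics in place of the neural models (intent classifiers, dense retrievers, and generative models) the paper builds on.

```python
def identify_intent(utterance, intent_keywords):
    """Stage 1, Intent Identification (toy): pick the intent whose
    keyword set overlaps most with the utterance; fall back to
    out-of-scope when nothing matches."""
    tokens = set(utterance.lower().split())
    scores = {intent: len(tokens & set(kws))
              for intent, kws in intent_keywords.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "out_of_scope"

def retrieve_context(utterance, documents, top_k=1):
    """Stage 2, Context Retrieval (toy): rank candidate documents by
    token overlap with the query and return the top-k."""
    tokens = set(utterance.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(tokens & set(d.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def generate_response(intent, contexts):
    """Stage 3, Response Generation (toy): fill a template conditioned
    on the predicted intent and the retrieved context."""
    if intent == "out_of_scope":
        return "Sorry, I couldn't understand your request."
    return f"Regarding {intent.replace('_', ' ')}: {contexts[0]}"

def answer(utterance, intent_keywords, documents):
    """Compose the three stages into one agent-assistant turn."""
    intent = identify_intent(utterance, intent_keywords)
    contexts = retrieve_context(utterance, documents)
    return generate_response(intent, contexts)
```

In a realistic deployment each stub would be swapped for a learned component, e.g. a fine-tuned sentence-encoder classifier for intent, a dense passage retriever for context, and a pretrained generative model for the response, while the composition of the three stages stays the same.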