iTA: A Digital Teaching Assistant

Designed and implemented a question-answering chatbot, dubbed iTA (intelligent Teaching Assistant), which provides detailed answers to questions by identifying the most relevant passages in long text sources such as documents or textbooks. iTA answers questions with a two-stage procedure. First, the most relevant paragraphs in the selected text source are identified using a retrieval-based approach, and each retrieved paragraph is assigned a relevance score. Second, a generative model extracts the relevant content from the top-ranked paragraph and generates the answer. Our results show that iTA is well suited to generating meaningful answers to questions posed by students.
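The two-stage procedure can be sketched as follows. This is a minimal illustration, not iTA's actual implementation: it assumes a simple bag-of-words cosine similarity as a stand-in for the retrieval scoring, and a `generator` callback as a placeholder for the generative answer model (e.g. a BART-style seq2seq model); the names `score`, `retrieve`, and `answer` are hypothetical.

```python
from collections import Counter
import math

def score(query: str, paragraph: str) -> float:
    # Stand-in retrieval score: bag-of-words cosine similarity
    # between the query and a candidate paragraph.
    q, p = Counter(query.lower().split()), Counter(paragraph.lower().split())
    num = sum(q[w] * p[w] for w in set(q) & set(p))
    den = (math.sqrt(sum(v * v for v in q.values()))
           * math.sqrt(sum(v * v for v in p.values())))
    return num / den if den else 0.0

def retrieve(query: str, paragraphs: list[str], k: int = 3) -> list[str]:
    # Stage 1: rank all paragraphs of the text source by
    # relevance score and keep the top k.
    return sorted(paragraphs, key=lambda p: score(query, p), reverse=True)[:k]

def answer(query: str, paragraphs: list[str], generator=None) -> str:
    # Stage 2: pass the top-ranked paragraph to a generative model
    # that produces the final answer. Here `generator` is a stub;
    # without one, we simply return the top paragraph verbatim.
    top = retrieve(query, paragraphs, k=1)[0]
    return generator(query, top) if generator else top
```

For example, given a small set of paragraphs, `answer("What is a stack?", paragraphs)` returns the paragraph most lexically similar to the question, which a real generative model would then condense into a direct answer.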
