Query and Answer Expansion from Conversation History

In this paper, we present our methods, experimental analysis, and final submissions for the Conversational Assistance Track (CAsT) at TREC 2019. Beyond language understanding, extracting knowledge from the dialogue history (e.g., previous queries and search results) is key to the conversational IR task. However, the limited annotated data in the CAsT task makes machine learning and other data-driven approaches infeasible. We therefore propose two ad hoc, intuitive approaches, Historical Query Expansion and Historical Answer Expansion, to improve the performance of a conversational IR system with limited training data. Our empirical results on the CAsT training set show that the proposed methods significantly improve the quality of conversational search in terms of retrieval (recall@1000: 0.774 → 0.844) and ranking (mAP: 0.187 → 0.197) over our strong baseline. As a result, our submitted entries outperform the median performance of all 21 teams.

ACM Reference Format: Jheng-Hong Yang, Sheng-Chieh Lin, Jimmy Lin, Ming-Feng Tsai, and Chuan-Ju Wang. 2019. Query and Answer Expansion from Conversation History. In TREC '19: Text REtrieval Conference, Nov 13–15, 2019, Gaithersburg, Maryland. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn
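To make the Historical Query Expansion idea concrete, the following is a minimal toy sketch, not the paper's actual method: it assumes a hypothetical `expand_query` helper that appends content words from earlier turns to the current query, so that context-dependent queries (e.g., ones using pronouns) can still match documents about the original topic. The stopword list and tokenization here are illustrative simplifications.

```python
import re

# Illustrative stopword list (an assumption, not from the paper).
STOPWORDS = {"what", "is", "the", "a", "an", "of", "about", "tell", "me",
             "its", "it", "how", "are", "does", "do", "in", "for", "and"}


def expand_query(history, current_query):
    """Append unseen content words from past queries to the current query."""
    seen = set(re.findall(r"[a-z0-9]+", current_query.lower()))
    expansion = []
    for past_query in history:
        for token in re.findall(r"[a-z0-9]+", past_query.lower()):
            if token not in STOPWORDS and token not in seen:
                expansion.append(token)
                seen.add(token)
    return " ".join([current_query] + expansion)


history = ["What is throat cancer?", "Is it treatable?"]
print(expand_query(history, "What are its symptoms?"))
# → "What are its symptoms? throat cancer treatable"
```

In a real system the expanded query string would then be issued to the retrieval stage (e.g., a BM25 retriever), where the added history terms disambiguate the otherwise context-free current turn.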