Word-embedding-based query expansion: Incorporating Deep Averaging Networks in Arabic document retrieval

One of the main issues associated with search engines is the query–document vocabulary mismatch problem, a long-standing problem in Information Retrieval (IR). This problem occurs when a user query does not match the content of stored documents, and it affects most search tasks. Automatic query expansion (AQE) is one of the most common approaches used to address this problem. Various AQE techniques have been proposed; these mainly involve finding synonyms or related words for the query terms. Word embedding (WE) is one of the methods currently receiving significant attention. Most existing AQE techniques focus on expanding individual query terms rather than the entire query during the expansion process, and this can lead to query drift if poor expansion terms are selected. In this article, we introduce Deep Averaging Networks (DANs), an architecture that feeds the average of the WE vectors produced by the Word2Vec toolkit for the terms in a query through several linear neural network layers. This average vector is assumed to represent the meaning of the query as a whole and can be used to find expansion terms that are relevant to the complete query. We explore the potential of DANs for AQE in Arabic document retrieval. We experiment with using DANs for AQE in the classic probabilistic BM25 model as well as for two recent expansion strategies: the Embedding-Based Query Expansion approach (EQE1) and the Prospect-Guided Query Expansion Strategy (V2Q). Although DANs did not improve all outcomes when used in the BM25 model, they outperformed all baselines when incorporated into the EQE1 and V2Q expansion strategies.
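To make the described pipeline concrete, the following is a minimal sketch of DAN-style query expansion as outlined above: average the Word2Vec vectors of the query terms, pass the average through a stack of linear layers, and pick nearby vocabulary terms as expansion candidates. It assumes gensim's KeyedVectors for the Word2Vec vectors, uses randomly initialised weights as untrained stand-ins for the DAN layers, and uses a placeholder path for pre-trained Arabic vectors; none of these details are specified in the abstract.

```python
# Illustrative sketch only: averaging + linear layers + nearest-neighbour expansion.
import numpy as np
from gensim.models import KeyedVectors


def average_query_vector(query_terms, wv):
    """Average the Word2Vec vectors of the in-vocabulary query terms."""
    vecs = [wv[t] for t in query_terms if t in wv.key_to_index]
    if not vecs:
        return None
    return np.mean(vecs, axis=0)


def dan_forward(avg_vec, layers):
    """Feed the averaged query vector through (W, b) linear layers with tanh."""
    h = avg_vec
    for W, b in layers:
        h = np.tanh(W @ h + b)
    return h


def expansion_terms(query_terms, wv, layers, topn=10):
    """Return candidate expansion terms close to the DAN output vector."""
    avg = average_query_vector(query_terms, wv)
    if avg is None:
        return []
    q_vec = dan_forward(avg, layers)
    # Retrieve extra neighbours so the original query terms can be filtered out.
    candidates = wv.similar_by_vector(q_vec, topn=topn + len(query_terms))
    return [w for w, _ in candidates if w not in set(query_terms)][:topn]


if __name__ == "__main__":
    # Hypothetical pre-trained Arabic Word2Vec vectors; the path is a placeholder.
    wv = KeyedVectors.load_word2vec_format("arabic_word2vec.bin", binary=True)
    dim = wv.vector_size
    rng = np.random.default_rng(0)
    # Untrained, dimension-preserving stand-ins for the DAN's linear layers.
    layers = [(rng.standard_normal((dim, dim)) * 0.1, np.zeros(dim)) for _ in range(2)]
    print(expansion_terms(["query", "terms"], wv, layers))
```

The expansion terms returned this way could then be appended to the original query before retrieval with BM25 or used inside an expansion strategy such as EQE1 or V2Q, as evaluated in the article.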
