LIDER: An Efficient High-dimensional Learned Index for Large-scale Dense Passage Retrieval

Many recent passage retrieval approaches use dense embeddings generated by deep neural models, an approach known as "dense passage retrieval". State-of-the-art end-to-end dense passage retrieval systems typically pair a deep neural model with an approximate nearest neighbor (ANN) search module: the model generates embeddings for the corpus and the queries, which are then indexed and searched by the high-performance ANN module. As data scale grows, the ANN module inevitably becomes the efficiency bottleneck. An alternative is the learned index, which achieves high search efficiency by learning the data distribution and predicting the location of the target data. However, most existing learned indexes are designed for low-dimensional data and are therefore unsuitable for dense passage retrieval over high-dimensional dense embeddings. In this paper, we propose LIDER, an efficient high-dimensional Learned Index for large-scale DEnse passage Retrieval. LIDER has a clustering-based hierarchical architecture formed by two layers of core models. A core model, the basic unit with which LIDER indexes and searches data, consists of an adapted recursive model index (RMI) and a dimension reduction component comprising an extended SortingKeys-LSH (SK-LSH) and a key re-scaling module. The dimension reduction component maps the high-dimensional dense embeddings to one-dimensional keys and sorts them in a specific order, which the RMI then uses to make fast predictions. Experiments show that LIDER achieves higher search speed with high retrieval quality compared to state-of-the-art ANN indexes on passage retrieval tasks; e.g., on large-scale data it reaches 1.2x the search speed of the fastest baseline in our evaluation, with significantly higher retrieval quality. Furthermore, LIDER offers a better speed-quality trade-off.
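The pipeline the abstract describes (reduce high-dimensional embeddings to sortable one-dimensional keys, then let a learned model predict positions in the sorted order) can be illustrated with a minimal sketch. This is not LIDER's implementation: it substitutes plain random-hyperplane LSH for the extended SK-LSH with key re-scaling, and a single linear model for the two-layer RMI hierarchy; all names and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus of unit-norm dense passage embeddings
# (in a real system these come from a neural encoder).
dim, n = 32, 1000
corpus = rng.normal(size=(n, dim))
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)

# Dimension reduction: random-hyperplane LSH packed into one sortable
# integer key (a simplification of SK-LSH's compound keys).
n_bits = 16
planes = rng.normal(size=(n_bits, dim))
weights = (1 << np.arange(n_bits, dtype=np.uint64))[::-1]

def lsh_key(v):
    bits = (planes @ v > 0).astype(np.uint64)
    return int(bits @ weights)  # total order over binary codes

keys = np.array([lsh_key(v) for v in corpus], dtype=np.float64)
order = np.argsort(keys)
sorted_keys, sorted_corpus = keys[order], corpus[order]

# Learned index: one linear model predicting position from key,
# standing in for the recursive model index (RMI).
positions = np.arange(n, dtype=np.float64)
slope, intercept = np.polyfit(sorted_keys, positions, 1)

def search(query, k=5, window=100):
    """Predict a position, then re-rank a local window by similarity."""
    center = int(np.clip(slope * lsh_key(query) + intercept, 0, n - 1))
    lo, hi = max(0, center - window), min(n, center + window)
    sims = sorted_corpus[lo:hi] @ query  # cosine sim (unit-norm vectors)
    top = np.argsort(-sims)[:k]
    return lo + top, sims[top]

idx, scores = search(corpus[0])
```

The key property this sketch demonstrates is that once keys are sorted, lookup cost is a model prediction plus a bounded local scan, rather than a traversal of the whole index; LIDER's clustering hierarchy and RMI make that prediction accurate at scale.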
