Results of the NeurIPS'21 Challenge on Billion-Scale Approximate Nearest Neighbor Search
Abstract: Despite the broad range of algorithms for Approximate Nearest Neighbor Search (ANNS), most empirical evaluations have focused on smaller datasets, typically of 1 million points~\citep{Benchmark}. However, deploying recent advances in embedding-based techniques for search, recommendation and ranking at scale requires ANNS indices at billion, trillion or larger scale. Barring a few recent papers, there is limited consensus on which algorithms are effective at this scale vis-\`a-vis their hardware cost.

This competition compares ANNS algorithms at billion scale by hardware cost, accuracy and performance. We set up an open source evaluation framework and leaderboards for both standardized and specialized hardware. The competition involves three tracks. The standard hardware track T1 evaluates algorithms on an Azure VM with limited DRAM, often the bottleneck in serving billion-scale indices, where the embedding data can be hundreds of gigabytes in size. It uses FAISS~\citep{Faiss17} as the baseline. The standard hardware track T2 additionally allows inexpensive SSDs alongside the limited DRAM and uses DiskANN~\citep{DiskANN19} as the baseline. The specialized hardware track T3 allows any hardware configuration, and again uses FAISS as the baseline.

We compiled six diverse billion-scale datasets, four newly released for this competition, that span a variety of modalities, data types, dimensions, deep learning models, distance functions and sources. The outcome of the competition was ranked leaderboards of algorithms in each track based on recall at a query throughput threshold. Additionally, for track T3, separate leaderboards were created based on recall as well as cost-normalized and power-normalized query throughput.
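To make the ranking metric concrete, below is a minimal sketch, not the competition's actual evaluation harness, of measuring recall@10 for a FAISS-style baseline: exact brute-force search supplies the ground truth, an IVF-PQ index supplies approximate results, and recall is the fraction of true nearest neighbors each query recovers. The dataset size, dimensionality and index parameters here are illustrative assumptions, far below the billion-scale setting of the competition.

```python
# Illustrative sketch only: synthetic data and toy index parameters,
# not the competition framework or its datasets.
import numpy as np
import faiss

d, n_base, n_query, k = 96, 100_000, 1_000, 10
rng = np.random.default_rng(0)
xb = rng.standard_normal((n_base, d)).astype("float32")   # base vectors
xq = rng.standard_normal((n_query, d)).astype("float32")  # query vectors

# Exact ground truth via brute-force L2 search.
flat = faiss.IndexFlatL2(d)
flat.add(xb)
_, gt = flat.search(xq, k)

# Approximate index: IVF with product quantization (a common FAISS baseline style).
quantizer = faiss.IndexFlatL2(d)
index = faiss.IndexIVFPQ(quantizer, d, 1024, 24, 8)  # 1024 lists, 24 sub-quantizers of 8 bits
index.train(xb)
index.add(xb)
index.nprobe = 32  # number of lists probed; trades accuracy for speed

_, approx = index.search(xq, k)

# recall@k: fraction of the true k nearest neighbors recovered, averaged over queries.
recall = np.mean([len(set(gt[i]) & set(approx[i])) / k for i in range(n_query)])
print(f"recall@{k} = {recall:.3f}")
```

On the actual tracks, the same recall computation applies, but entries are ranked by recall subject to a query throughput threshold, and for T3 by cost-normalized and power-normalized throughput as well.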
DOI: 10.48550/arxiv.2205.03763