Co-design Hardware and Algorithm for Vector Search
Format: Article
Language: English
Abstract: Vector search has emerged as the foundation for large-scale information retrieval and machine learning systems, with search engines like Google and Bing processing tens of thousands of queries per second on petabyte-scale document datasets by evaluating vector similarities between encoded query texts and web documents. As performance demands for vector search systems surge, accelerated hardware offers a promising solution in the post-Moore's Law era. We introduce FANNS, an end-to-end, scalable vector search framework on FPGAs. Given a user-provided recall requirement on a dataset and a hardware resource budget, FANNS automatically co-designs the hardware and the algorithm, then generates the corresponding accelerator. The framework also supports scale-out by incorporating a hardware TCP/IP stack in the accelerator. FANNS attains up to 23.0× and 37.2× speedup over FPGA and CPU baselines, respectively, and demonstrates superior scalability to GPUs, achieving 5.5× and 7.6× speedup in median and 95th-percentile (P95) latency in an eight-accelerator configuration. The remarkable performance of FANNS lays a robust groundwork for future FPGA integration in data centers and AI supercomputers.
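The core computation the abstract describes is evaluating vector similarities between an encoded query and many encoded documents, then returning the closest matches. As a point of reference only, here is a minimal NumPy sketch of exhaustive top-k inner-product search; FANNS itself implements approximate search in FPGA hardware, and every name below is ours, not the paper's:

    import numpy as np

    def topk_inner_product(queries, corpus, k):
        """Brute-force top-k retrieval by inner-product similarity.

        queries: (nq, d) encoded query vectors
        corpus:  (n, d) encoded document vectors
        Returns (nq, k) indices of the most similar documents per query.
        """
        scores = queries @ corpus.T                            # (nq, n) similarities
        part = np.argpartition(-scores, k - 1, axis=1)[:, :k]  # unordered top-k
        row = np.arange(queries.shape[0])[:, None]
        order = np.argsort(-scores[row, part], axis=1)         # rank the k candidates
        return part[row, order]

    # Toy usage: 3 queries against 1000 documents in 128-d space.
    rng = np.random.default_rng(0)
    docs = rng.standard_normal((1000, 128)).astype(np.float32)
    qs = rng.standard_normal((3, 128)).astype(np.float32)
    print(topk_inner_product(qs, docs, k=5))

Exhaustive search like this is exact but scales linearly with corpus size, which is why production systems use approximate indexes and, as here, hardware acceleration.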
DOI: 10.48550/arxiv.2306.11182
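The abstract also says FANNS selects a design point from a recall requirement and a hardware resource budget. The paper's actual search space and cost model are not given here, so the following is only a toy illustration, with hypothetical fields and numbers, of choosing the lowest-latency configuration that meets a recall target within a resource budget:

    # Toy model of recall/budget-constrained design selection. All fields and
    # numbers are hypothetical; FANNS's real co-design procedure differs.
    from dataclasses import dataclass

    @dataclass
    class Design:
        name: str
        recall: float      # measured recall@k on a dataset sample
        luts: int          # FPGA LUTs consumed by this accelerator variant
        latency_us: float  # predicted query latency

    def pick_design(candidates, recall_target, lut_budget):
        feasible = [d for d in candidates
                    if d.recall >= recall_target and d.luts <= lut_budget]
        if not feasible:
            raise ValueError("no design meets the recall target within budget")
        return min(feasible, key=lambda d: d.latency_us)

    candidates = [
        Design("ivf_small", recall=0.89, luts=300_000, latency_us=40.0),
        Design("ivf_large", recall=0.95, luts=520_000, latency_us=75.0),
        Design("ivf_wide",  recall=0.96, luts=610_000, latency_us=60.0),
    ]
    print(pick_design(candidates, recall_target=0.95, lut_budget=600_000).name)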