QUITO: Accelerating Long-Context Reasoning through Query-Guided Context Compression
Format: Article
Language: English
Online access: Order full text
Abstract: In-context learning (ICL) capabilities are foundational to the success of large language models (LLMs). Recently, context compression has attracted growing interest because it can substantially reduce the reasoning complexity and computation cost of LLMs. In this paper, we introduce a novel Query-gUIded aTtention cOmpression (QUITO) method, which leverages the attention of the question over the context to filter out useless information. Specifically, we use a trigger token to calculate the attention distribution of the context in response to the question. Based on this distribution, we propose three different filtering methods to satisfy the budget constraints on context length. We evaluate QUITO on two widely used datasets, NaturalQuestions and ASQA. Experimental results demonstrate that QUITO significantly outperforms established baselines across various datasets and downstream LLMs, underscoring its effectiveness. Our code is available at https://github.com/Wenshansilvia/attention_compressor.
DOI: 10.48550/arxiv.2408.00274
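The abstract describes the core mechanism: the attention that a trigger token placed after the question pays to each context token is used as a relevance score, and context tokens are then filtered to fit a length budget. Below is a minimal sketch of that idea, assuming a Hugging Face causal LM; the model name, the use of the final question position as the trigger, and the simple token-level top-k filter are illustrative assumptions, not the paper's exact implementation (the authors' code is at the repository linked above, and the paper proposes three filtering strategies rather than this single one).

```python
# Minimal sketch of query-guided attention compression (assumptions noted above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def compress_context(context: str, question: str,
                     model_name: str = "gpt2", budget: float = 0.5) -> str:
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    # Encode the context followed by the question; the last position acts as the
    # "trigger" whose attention over context tokens serves as a relevance score.
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    q_ids = tokenizer(question, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, q_ids], dim=1)

    with torch.no_grad():
        out = model(input_ids, output_attentions=True)

    # Attention of the trigger (last) position over context positions,
    # averaged across layers and heads: shape (ctx_len,).
    att = torch.stack(out.attentions)          # (layers, 1, heads, seq, seq)
    ctx_len = ctx_ids.shape[1]
    scores = att[:, 0, :, -1, :ctx_len].mean(dim=(0, 1))

    # Token-level filtering: keep the highest-scoring context tokens within
    # the length budget, preserving their original order.
    k = max(1, int(budget * ctx_len))
    keep = torch.topk(scores, k).indices.sort().values
    return tokenizer.decode(ctx_ids[0, keep])
```

In practice, filtering is typically done at a coarser granularity (e.g., sentences or chunks scored by the attention their tokens receive) rather than dropping individual tokens, which is one of the variants the abstract alludes to with its three filtering methods.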