Characterizing Prompt Compression Methods for Long Context Inference
Saved in:
Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: Long context inference presents challenges at the system level, with increased compute and memory requirements, as well as from an accuracy perspective, in being able to reason over long contexts. Recently, several methods have been proposed to compress the prompt in order to reduce the context length. However, there has been little work comparing the different proposed methods across different tasks through a standardized analysis, which has led to conflicting results. To address this, we perform a comprehensive characterization and evaluation of different prompt compression methods. In particular, we analyze extractive compression, summarization-based abstractive compression, and token pruning methods. Surprisingly, we find that extractive compression often outperforms all the other approaches and enables up to 10x compression with minimal accuracy degradation. Interestingly, we also find that, despite several recent claims, token pruning methods often lag behind extractive compression; we found only marginal improvements on summarization tasks.
DOI: 10.48550/arxiv.2407.08892
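The abstract contrasts three method families: extractive compression, summarization-based abstractive compression, and token pruning. As a concrete reference point for the first family, the one the paper finds strongest, here is a minimal sketch of extractive compression: rank the context's sentences by relevance to the query and keep the best ones within a length budget. This is an illustrative toy under stated assumptions, not the paper's implementation; the lexical-overlap scorer and the `extractive_compress` helper are hypothetical stand-ins for the learned rerankers or embedding models that real extractive compressors use.

```python
import re


def tokenize(text: str) -> set[str]:
    """Lowercased word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))


def split_sentences(text: str) -> list[str]:
    """Naive sentence splitter on ., !, ? boundaries."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def overlap_score(sentence: str, query: str) -> float:
    """Fraction of query tokens that also appear in the sentence.
    A crude stand-in for the learned relevance scorers used in practice."""
    query_tokens = tokenize(query)
    if not query_tokens:
        return 0.0
    return len(tokenize(sentence) & query_tokens) / len(query_tokens)


def extractive_compress(context: str, query: str, ratio: float = 10.0) -> str:
    """Keep whole sentences, highest-scoring first, until the kept text
    fits a word budget of len(context)/ratio; emit them in original order."""
    sentences = split_sentences(context)
    budget = max(1, int(len(context.split()) / ratio))
    ranked = sorted(range(len(sentences)),
                    key=lambda i: overlap_score(sentences[i], query),
                    reverse=True)
    kept: set[int] = set()
    used = 0
    for i in ranked:
        n = len(sentences[i].split())
        # Always keep the single best sentence, even if it overshoots;
        # after that, skip any sentence that would exceed the budget.
        if kept and used + n > budget:
            continue
        kept.add(i)
        used += n
    # Reassemble in document order so the compressed prompt reads naturally.
    return " ".join(sentences[i] for i in sorted(kept))


if __name__ == "__main__":
    context = ("The meeting is on Tuesday. Lunch will be catered. "
               "The quarterly report shows revenue grew 12 percent. "
               "Parking is available on level two.")
    query = "How much did revenue grow in the quarterly report?"
    print(extractive_compress(context, query, ratio=3.0))
    # -> "The quarterly report shows revenue grew 12 percent."
```

Two design points carry over to real systems: surviving sentences are emitted in their original order rather than relevance order, which keeps the compressed prompt coherent for the downstream model, and the `ratio` parameter corresponds to the compression factors (e.g., the 10x setting) that the paper evaluates.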