Characterizing Prompt Compression Methods for Long Context Inference

Long context inference presents challenges at the system level with increased compute and memory requirements, as well as from an accuracy perspective in being able to reason over long contexts. Recently, several methods have been proposed to compress the prompt to reduce the context length. However, there has been little work on comparing the different proposed methods across different tasks through a standardized analysis. This has led to conflicting results. To address this, here we perform a comprehensive characterization and evaluation of different prompt compression methods. In particular, we analyze extractive compression, summarization-based abstractive compression, and token pruning methods. Surprisingly, we find that extractive compression often outperforms all the other approaches, and enables up to 10x compression with minimal accuracy degradation. Interestingly, we also find that despite several recent claims, token pruning methods often lag behind extractive compression. We only found marginal improvements on summarization tasks.
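
The abstract contrasts three families of prompt compression: extractive methods keep verbatim spans (typically sentences) from the original prompt, abstractive methods rewrite the prompt as a shorter model-generated summary, and token pruning drops individual low-importance tokens. As a rough illustration of the extractive approach the paper finds strongest, here is a minimal sketch; the query-overlap scoring heuristic and the function name are hypothetical stand-ins, not the authors' implementation (published extractive compressors typically score spans with a trained retriever or reranker).

```python
import math
import re
from collections import Counter

def extractive_compress(prompt: str, query: str, ratio: float = 0.1) -> str:
    """Keep the highest-scoring sentences from `prompt` until a word budget
    of roughly ratio * len(prompt) is spent (ratio=0.1 ~ 10x compression)."""
    sentences = re.split(r"(?<=[.!?])\s+", prompt.strip())
    query_terms = Counter(re.findall(r"\w+", query.lower()))

    def relevance(sentence: str) -> float:
        # Hypothetical heuristic: query-term overlap, damped by sentence
        # length so long sentences don't dominate the budget.
        terms = re.findall(r"\w+", sentence.lower())
        if not terms:
            return 0.0
        return sum(query_terms[t] for t in terms) / math.sqrt(len(terms))

    budget = max(1, int(ratio * len(prompt.split())))
    ranked = sorted(range(len(sentences)),
                    key=lambda i: relevance(sentences[i]), reverse=True)

    kept, used = set(), 0
    for i in ranked:
        n = len(sentences[i].split())
        if used + n > budget and kept:
            continue  # skip sentences that would exceed the budget
        kept.add(i)
        used += n

    # Re-emit kept sentences in their original order so the compressed
    # prompt still reads coherently.
    return " ".join(sentences[i] for i in sorted(kept))
```

By contrast, token pruning would score and drop individual tokens inside these sentences (often using attention- or model-based importance), and abstractive compression would replace the prompt wholesale with a summary; the paper's evaluation finds that both often trail this simpler extractive baseline.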

Bibliographic Details

Main Authors: Jha, Siddharth; Erdogan, Lutfi Eren; Kim, Sehoon; Keutzer, Kurt; Gholami, Amir
Format: Article
Language: English
Published: 2024-07-11
Subjects: Computer Science - Computation and Language; Computer Science - Learning
DOI: 10.48550/arXiv.2407.08892
Source: arXiv.org
Online Access: https://arxiv.org/abs/2407.08892