LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding
Saved in:
Main authors: | Bai, Yushi; Lv, Xin; Zhang, Jiajie; Lyu, Hongchang; Tang, Jiankai; Huang, Zhidian; Du, Zhengxiao; Liu, Xiao; Zeng, Aohan; Hou, Lei; Dong, Yuxiao; Tang, Jie; Li, Juanzi |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computation and Language |
Online access: | Order full text |
creator | Bai, Yushi; Lv, Xin; Zhang, Jiajie; Lyu, Hongchang; Tang, Jiankai; Huang, Zhidian; Du, Zhengxiao; Liu, Xiao; Zeng, Aohan; Hou, Lei; Dong, Yuxiao; Tang, Jie; Li, Juanzi |
description | Although large language models (LLMs) demonstrate impressive performance on
many language tasks, most of them can only handle texts a few thousand tokens
long, limiting their application to longer sequence inputs such as books,
reports, and codebases. Recent works have proposed methods to improve LLMs'
long context capabilities by extending context windows and by adding more
sophisticated memory mechanisms. However, comprehensive benchmarks tailored to
evaluating long context understanding have been lacking. In this paper, we
introduce LongBench, the first bilingual, multi-task benchmark for long
context understanding, enabling a more rigorous evaluation of long context
capabilities. LongBench comprises 21 datasets across 6 task categories in both
English and Chinese, with an average length of 6,711 words (English) and
13,386 characters (Chinese). These tasks cover key long-text application areas
including single-doc QA, multi-doc QA, summarization, few-shot learning,
synthetic tasks, and code completion. All datasets in LongBench are
standardized into a unified format, allowing for effortless automatic
evaluation of LLMs. Upon comprehensive evaluation of 8 LLMs on LongBench, we
find that: (1) The commercial model (GPT-3.5-Turbo-16k) outperforms the
open-source models, but still struggles on longer contexts. (2) Scaled
position embeddings and fine-tuning on longer sequences lead to substantial
improvements in long context understanding. (3) Context compression techniques
such as retrieval help models with weak long-context ability, but their
performance still lags behind models with strong long context understanding.
The code and datasets are available at https://github.com/THUDM/LongBench. |
doi_str_mv | 10.48550/arxiv.2308.14508 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2308.14508 |
language | eng |
recordid | cdi_arxiv_primary_2308_14508 |
source | arXiv.org |
subjects | Computer Science - Computation and Language |
title | LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding |
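A note on the abstract's unified-format claim: because every LongBench sample shares the same schema, loading a task and scoring model output takes only a few lines. The sketch below is illustrative, not the paper's official evaluation code. It assumes the datasets are published on the Hugging Face Hub under "THUDM/LongBench" (as the linked repository indicates) and that each sample exposes "context", "input", "answers", and "_id" fields; the word-level F1 shown here is a common QA metric, not necessarily the exact metric LongBench applies to every task.

```python
# Hedged sketch: load one LongBench task and score placeholder predictions.
from collections import Counter

from datasets import load_dataset


def token_f1(prediction: str, reference: str) -> float:
    """Word-level F1 between a prediction and one reference answer."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


# Load the test split of one of the 21 datasets (here: HotpotQA, multi-doc QA).
data = load_dataset("THUDM/LongBench", "hotpotqa", split="test")

for sample in data.select(range(3)):
    # Each sample follows the unified schema: a long "context", a
    # task-specific "input" (e.g. the question), and gold "answers".
    prompt = f"{sample['context']}\n\nQuestion: {sample['input']}\nAnswer:"
    prediction = "..."  # placeholder: call your LLM on `prompt` here
    score = max(token_f1(prediction, ref) for ref in sample["answers"])
    print(sample["_id"], round(score, 3))
```

Swapping the dataset name (e.g. "hotpotqa" for any of the other 20 tasks) is all that changes between tasks, which is what makes fully automatic evaluation possible.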
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-25T07%3A45%3A11IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=LongBench:%20A%20Bilingual,%20Multitask%20Benchmark%20for%20Long%20Context%20Understanding&rft.au=Bai,%20Yushi&rft.date=2023-08-28&rft_id=info:doi/10.48550/arxiv.2308.14508&rft_dat=%3Carxiv_GOX%3E2308_14508%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |
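Finding (3) of the abstract concerns context compression via retrieval: shortening a long input so that a model with a small context window still sees the most relevant evidence. The sketch below is a hedged illustration, not the paper's retrieval setup; it ranks chunks by naive word overlap rather than a learned retriever, and every name in it (chunk_text, overlap_score, compress_context) is hypothetical.

```python
# Hedged sketch of retrieval-style context compression (illustrative only).


def chunk_text(text: str, chunk_size: int = 200) -> list[str]:
    """Split a long context into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]


def overlap_score(query: str, chunk: str) -> float:
    """Fraction of query words that also appear in the chunk."""
    query_words = set(query.lower().split())
    chunk_words = set(chunk.lower().split())
    return len(query_words & chunk_words) / max(len(query_words), 1)


def compress_context(context: str, query: str, top_k: int = 4) -> str:
    """Keep only the top_k chunks most relevant to the query, so a
    short-context model still sees the (hopefully) useful evidence."""
    chunks = chunk_text(context)
    ranked = sorted(chunks, key=lambda c: overlap_score(query, c), reverse=True)
    # Restore the selected chunks to their original document order.
    kept = sorted(ranked[:top_k], key=chunks.index)
    return "\n...\n".join(kept)


if __name__ == "__main__":
    long_context = " ".join(f"sentence {i} about various topics." for i in range(2000))
    print(compress_context(long_context, "Which sentence mentions topics?"))
```

As the abstract notes, this kind of compression helps weak long-context models but does not close the gap to models that natively understand long inputs, since relevant evidence can be dropped before the model ever sees it.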