M4LE: A Multi-Ability Multi-Range Multi-Task Multi-Domain Long-Context Evaluation Benchmark for Large Language Models
Main Authors: , , , , , , ,
Format: Article
Language: English
Online Access: Order full text
Abstract: Managing long sequences has become an important and necessary capability for large language models (LLMs). However, it remains an open question how to comprehensively and systematically evaluate the long-sequence capability of LLMs, in part because conventional, widely used benchmarks consist mainly of short sequences. In this paper, we propose M4LE, a Multi-ability, Multi-range, Multi-task, Multi-domain benchmark for Long-context Evaluation. M4LE is built on a diverse NLP task pool comprising 36 NLP datasets, 11 task types, and 12 domains. To alleviate the scarcity of tasks with naturally long sequences and to incorporate multi-ability assessment, we propose an automatic approach (requiring only negligible human annotation) that converts short-sequence tasks into a unified long-sequence scenario in which LLMs must identify single or multiple relevant spans in long contexts based on explicit or semantic hints. Specifically, the scenario covers five types of ability: (1) explicit single-span; (2) semantic single-span; (3) explicit multiple-span; (4) semantic multiple-span; and (5) global context understanding. The resulting samples in M4LE are evenly distributed across input lengths from 1k to 8k tokens. We conducted a systematic evaluation of 11 well-established LLMs, especially those optimized for long-sequence inputs. Our results reveal that: (1) current LLMs struggle to understand long contexts, particularly when tasks require multiple-span attention; (2) semantic retrieval tasks are more difficult for competent LLMs; and (3) models fine-tuned on longer text with position interpolation perform comparably to models using Neural Tangent Kernel (NTK)-aware scaling without fine-tuning. We make our benchmark publicly available to encourage future research in this challenging area.
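
To make the conversion idea concrete, here is a minimal sketch of how a short-sequence task pool could be turned into one long-context sample: a few relevant (passage, question, answer) items are mixed with distractor passages until a target length is reached. All function and field names here are hypothetical illustrations of the general recipe, not M4LE's actual pipeline.

```python
import random

def build_long_context_sample(task_pool, target_tokens, token_len, n_relevant=1):
    """Convert short (passage, question, answer) items into one long-context sample.

    Picks n_relevant items whose questions must be answered, then pads the
    context with distractor passages from the same pool until target_tokens
    is reached. Relevant passages land at arbitrary depths, so the model
    must locate them within the long context.
    """
    pool = random.sample(task_pool, len(task_pool))  # shuffled copy
    relevant, distractors = pool[:n_relevant], pool[n_relevant:]

    passages = [item["passage"] for item in relevant]
    total = sum(token_len(p) for p in passages)
    for item in distractors:
        if total >= target_tokens:
            break
        passages.append(item["passage"])
        total += token_len(item["passage"])

    random.shuffle(passages)  # bury the relevant spans among distractors
    return {
        "context": "\n\n".join(passages),
        "questions": [item["question"] for item in relevant],
        "answers": [item["answer"] for item in relevant],
    }
```

With a crude token counter such as `token_len = lambda s: len(s.split())`, setting `n_relevant=1` yields a single-span sample and `n_relevant > 1` a multiple-span sample; whether the hint is explicit or semantic depends on how the question refers to the relevant passage.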
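The third finding contrasts two ways of extending a model's context window. For readers unfamiliar with them, the sketch below shows the commonly published forms of position interpolation (compress position indices back into the trained range, usually with fine-tuning) and NTK-aware scaling (enlarge the rotary base instead, often applied without fine-tuning). This is a schematic of the general techniques, not code from the paper.

```python
import numpy as np

def rope_angles(positions, dim, base=10000.0):
    """Standard RoPE angles: angle[p, i] = p / base**(2i/dim)."""
    inv_freq = 1.0 / base ** (np.arange(0, dim, 2) / dim)
    return np.outer(positions, inv_freq)

def rope_position_interpolation(positions, dim, scale, base=10000.0):
    """Position interpolation: divide positions by `scale`
    (= target length / trained length) so longer sequences map back
    into the trained position range; typically paired with fine-tuning."""
    return rope_angles(np.asarray(positions) / scale, dim, base)

def rope_ntk_aware(positions, dim, scale, base=10000.0):
    """NTK-aware scaling: enlarge the RoPE base rather than the positions,
    stretching low-frequency components while keeping high-frequency ones
    close to the original; often used without any fine-tuning."""
    ntk_base = base * scale ** (dim / (dim - 2))
    return rope_angles(positions, dim, ntk_base)
```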
DOI: 10.48550/arxiv.2310.19240