FltLM: An Integrated Long-Context Large Language Model for Effective Context Filtering and Understanding
Saved in:

| Main authors: | , , , , , , , , |
| --- | --- |
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
Summary:

The development of Long-Context Large Language Models (LLMs) has markedly advanced natural language processing by facilitating the processing of textual data across long documents and multiple corpora. However, Long-Context LLMs still face two critical challenges: the "lost in the middle" phenomenon, where crucial middle-context information is likely to be missed, and the distraction issue, where the model loses focus due to overly extended contexts. To address these challenges, we propose the Context Filtering Language Model (FltLM), a novel integrated Long-Context LLM that enhances performance on multi-document question-answering (QA) tasks. Specifically, FltLM incorporates a context filter with a soft mask mechanism, identifying and dynamically excluding irrelevant content so that the model concentrates on pertinent information for better comprehension and reasoning. Our approach not only mitigates both challenges but also enables the model to operate conveniently in a single forward pass. Experimental results demonstrate that FltLM significantly outperforms supervised fine-tuning and retrieval-based methods in complex QA scenarios, suggesting a promising solution for more accurate and reliable long-context natural language understanding applications.
DOI: 10.48550/arxiv.2410.06886
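
The soft mask mechanism described in the summary can be pictured with a short sketch. The code below is a minimal, hypothetical illustration rather than the paper's actual implementation: a small scoring head (the name `SoftMaskContextFilter` and its structure are assumptions for this example) assigns each context token a relevance score in [0, 1], and the log of that score becomes an additive attention bias, so irrelevant tokens are softly suppressed within a single forward pass.

```python
import torch
import torch.nn as nn


class SoftMaskContextFilter(nn.Module):
    """Hypothetical soft-mask context filter; names and structure are
    illustrative assumptions, not taken from the FltLM paper."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        # Lightweight head scoring each token's relevance from its hidden state.
        self.scorer = nn.Linear(hidden_dim, 1)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim)
        relevance = torch.sigmoid(self.scorer(hidden_states)).squeeze(-1)
        # Soft mask in [0, 1] -> additive attention bias:
        #   relevance near 1 -> bias near 0 (token kept),
        #   relevance near 0 -> large negative bias (token softly excluded).
        return torch.log(relevance.clamp_min(1e-6))


if __name__ == "__main__":
    torch.manual_seed(0)
    filt = SoftMaskContextFilter(hidden_dim=64)
    hidden = torch.randn(2, 128, 64)  # toy hidden states: 2 sequences, 128 tokens
    bias = filt(hidden)               # shape (2, 128); add to attention logits per key
    print(bias.shape)                 # torch.Size([2, 128])
```

Because the bias is differentiable and computed from the same hidden states the model already produces, filtering and answering can share one pass through the network, consistent with the single-forward-pass property the summary emphasizes.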