ChuLo: Chunk-Level Key Information Representation for Long Document Processing
Format: Article
Language: English
Abstract: Transformer-based models have achieved remarkable success in various Natural Language Processing (NLP) tasks, yet their ability to handle long documents is constrained by computational limitations. Traditional approaches, such as truncating inputs, sparse self-attention, and chunking, attempt to mitigate these issues, but they often lead to information loss and hinder the model's ability to capture long-range dependencies. In this paper, we introduce ChuLo, a novel chunk representation method for long document classification that addresses these limitations. ChuLo groups input tokens into chunks using unsupervised keyphrase extraction and emphasizes semantically important, keyphrase-based chunks to retain core document content while reducing input length. This approach minimizes information loss and improves the efficiency of Transformer-based models. Preserving all tokens is particularly important for long document understanding, and especially for token classification tasks, where fine-grained annotations depend on the entire sequence context and would otherwise be lost. We evaluate our method on multiple long document classification and long document token classification tasks, demonstrating its effectiveness through comprehensive qualitative and quantitative analyses.
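The abstract only sketches the pipeline, so the following is a minimal, illustrative Python sketch of the two steps it names: grouping tokens into chunks and up-weighting chunks that contain keyphrases found by unsupervised extraction. The frequency-based extractor, the fixed chunk size, and the scalar boost below are assumptions made for illustration; the paper's actual extractor and chunk-representation scheme may differ.

```python
from collections import Counter

def extract_keyphrases(tokens, top_k=5, max_ngram=2):
    # Toy unsupervised keyphrase extraction: rank n-grams by frequency,
    # skipping any n-gram that contains a stopword. This is a stand-in
    # for the unsupervised extractor the abstract refers to.
    stop = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "with"}
    counts = Counter()
    for n in range(1, max_ngram + 1):
        for i in range(len(tokens) - n + 1):
            gram = tuple(tokens[i:i + n])
            if not any(w in stop for w in gram):
                counts[gram] += 1
    return [gram for gram, _ in counts.most_common(top_k)]

def chunk_with_keyphrase_weights(tokens, chunk_size=16, boost=2.0):
    # Group tokens into fixed-size chunks and up-weight tokens that belong
    # to an extracted keyphrase, so keyphrase-bearing chunks dominate the
    # chunk representation. Chunk size and boost are illustrative choices.
    key_tokens = {w for gram in extract_keyphrases(tokens) for w in gram}
    chunks = [tokens[i:i + chunk_size]
              for i in range(0, len(tokens), chunk_size)]
    return [[(t, boost if t in key_tokens else 1.0) for t in chunk]
            for chunk in chunks]

if __name__ == "__main__":
    text = ("transformer models struggle with long documents because "
            "self attention cost grows quadratically with document length "
            "so chunk representations compress long documents").split()
    for chunk in chunk_with_keyphrase_weights(text, chunk_size=8):
        print(chunk)
```

In the setting the abstract describes, the weighted tokens within each chunk would presumably then be pooled into a single chunk embedding fed to the Transformer, shrinking the effective sequence length by roughly a factor of the chunk size while keeping every token's contribution.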
DOI: 10.48550/arxiv.2410.11119