PEneo: Unifying Line Extraction, Line Grouping, and Entity Linking for End-to-end Document Pair Extraction
Format: Article
Language: English
Online access: Order full text
Abstract: Document pair extraction aims to identify key and value entities as
their relationships from visually-rich documents. Most existing methods divide
it into two separate tasks: semantic entity recognition (SER) and relation
extraction (RE). However, simply concatenating SER and RE serially can lead to
severe error propagation, and it fails to handle cases like multi-line entities
in real scenarios. To address these issues, this paper introduces a novel
framework, PEneo (Pair Extraction new decoder option), which performs document
pair extraction in a unified pipeline, incorporating three concurrent
sub-tasks: line extraction, line grouping, and entity linking. This approach
alleviates the error accumulation problem and can handle the case of multi-line
entities. Furthermore, to better evaluate the model's performance and to
facilitate future research on pair extraction, we introduce RFUND, a
re-annotated version of the commonly used FUNSD and XFUND datasets, to make
them more accurate and cover realistic situations. Experiments on various
benchmarks demonstrate PEneo's superiority over previous pipelines, boosting
the performance by a large margin (e.g., 19.89%-22.91% F1 score on RFUND-EN)
when combined with various backbones like LiLT and LayoutLMv3, showing its
effectiveness and generality. Code and the new annotations are available at
https://github.com/ZeningLin/PEneo.
DOI: 10.48550/arxiv.2401.03472
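
To make the three-part decomposition in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of what parallel heads for line extraction, line grouping, and entity linking over a shared backbone could look like. This is not the authors' implementation (see the linked repository for that); the class name, the pooled line features, and the bilinear-style pairwise scoring are all assumptions for illustration.

```python
import torch
import torch.nn as nn


class PairExtractionHeads(nn.Module):
    """Hypothetical sketch of PEneo-style heads: per-token line tagging,
    plus pairwise scorers for line grouping and key-value entity linking.
    Not the authors' code; names, shapes, and scoring are assumptions."""

    def __init__(self, hidden_size: int = 768, num_line_tags: int = 3):
        super().__init__()
        # Sub-task 1: line extraction as token tagging (e.g., B/I/O tags).
        self.line_tagger = nn.Linear(hidden_size, num_line_tags)
        # Sub-task 2: line grouping -- score whether two text lines
        # belong to the same (possibly multi-line) entity.
        self.group_q = nn.Linear(hidden_size, hidden_size)
        self.group_k = nn.Linear(hidden_size, hidden_size)
        # Sub-task 3: entity linking -- score key -> value relations.
        self.link_q = nn.Linear(hidden_size, hidden_size)
        self.link_k = nn.Linear(hidden_size, hidden_size)

    def forward(self, token_feats: torch.Tensor, line_feats: torch.Tensor):
        # token_feats: (B, T, H) from a backbone such as LiLT or LayoutLMv3;
        # line_feats:  (B, N, H) features pooled per detected text line.
        tag_logits = self.line_tagger(token_feats)  # (B, T, num_line_tags)
        # Pairwise logits s_ij = q_i . k_j for every ordered line pair.
        group_logits = self.group_q(line_feats) @ self.group_k(line_feats).transpose(1, 2)  # (B, N, N)
        link_logits = self.link_q(line_feats) @ self.link_k(line_feats).transpose(1, 2)     # (B, N, N)
        return tag_logits, group_logits, link_logits
```

Because the three heads share one encoder and are trained jointly rather than chained serially, errors from entity recognition are not hard-committed before relation decisions are made, which is the error-propagation problem the abstract attributes to the SER-then-RE pipeline.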