FormNet: Structural Encoding beyond Sequential Modeling in Form Document Information Extraction
Sequence modeling has demonstrated state-of-the-art performance on natural language and document understanding tasks. However, it is challenging to correctly serialize tokens in form-like documents in practice due to their variety of layout patterns. We propose FormNet, a structure-aware sequence model to mitigate the suboptimal serialization of forms.
Saved in:
Main authors: | Lee, Chen-Yu; Li, Chun-Liang; Dozat, Timothy; Perot, Vincent; Su, Guolong; Hua, Nan; Ainslie, Joshua; Wang, Renshen; Fujii, Yasuhisa; Pfister, Tomas |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computation and Language; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning |
Online access: | Order full text |
creator | Lee, Chen-Yu; Li, Chun-Liang; Dozat, Timothy; Perot, Vincent; Su, Guolong; Hua, Nan; Ainslie, Joshua; Wang, Renshen; Fujii, Yasuhisa; Pfister, Tomas |
description | Sequence modeling has demonstrated state-of-the-art performance on natural
language and document understanding tasks. However, it is challenging to
correctly serialize tokens in form-like documents in practice due to their
variety of layout patterns. We propose FormNet, a structure-aware sequence
model to mitigate the suboptimal serialization of forms. First, we design Rich
Attention that leverages the spatial relationship between tokens in a form for
more precise attention score calculation. Second, we construct Super-Tokens for
each word by embedding representations from their neighboring tokens through
graph convolutions. FormNet therefore explicitly recovers local syntactic
information that may have been lost during serialization. In experiments,
FormNet outperforms existing methods with a more compact model size and less
pre-training data, establishing new state-of-the-art performance on CORD, FUNSD
and Payment benchmarks. |
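The abstract above describes constructing Super-Tokens by embedding representations from neighboring tokens through graph convolutions. A minimal sketch of that idea follows; the function name, the Euclidean distance-threshold neighbor rule, and the mean aggregation are all illustrative assumptions, not FormNet's actual architecture.

```python
import numpy as np

def super_tokens(embeddings, positions, radius=1.5):
    """One graph-convolution step: each word's vector is averaged with
    those of its spatial neighbors on the page.

    embeddings: (n, d) word vectors; positions: (n, 2) page coordinates.
    The distance-threshold adjacency is a hypothetical stand-in for the
    paper's graph construction.
    """
    # Pairwise Euclidean distances between token positions.
    dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    # Adjacency from spatial proximity; dist[i, i] == 0 gives self-loops.
    adj = (dist <= radius).astype(float)
    # Row-normalize so each output is a mean over its neighborhood.
    adj /= adj.sum(axis=1, keepdims=True)
    return adj @ embeddings

emb = np.eye(3)                                         # three toy word embeddings
pos = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0]])   # third word far away
st = super_tokens(emb, pos)
# Words 0 and 1 are neighbors, so their super-tokens mix their embeddings;
# word 2 has no neighbors within the radius and keeps its own embedding.
```

The point of the aggregation is the one the abstract makes: tokens that are adjacent on the page exchange information regardless of where a left-to-right serialization placed them.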
format | Article |
identifier | DOI: 10.48550/arxiv.2203.08411 |
language | eng |
source | arXiv.org |
subjects | Computer Science - Computation and Language; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning |
title | FormNet: Structural Encoding beyond Sequential Modeling in Form Document Information Extraction |