VTLayout: Fusion of Visual and Text Features for Document Layout Analysis

Documents often contain complex physical structures, which make the Document Layout Analysis (DLA) task challenging. As a pre-processing step for content extraction, DLA has the potential to capture rich information in historical or scientific documents on a large scale. Although many deep-learning-based methods from computer vision have achieved excellent performance in detecting Figure blocks in documents, they remain unsatisfactory at recognizing the List, Table, Text, and Title category blocks. This paper proposes VTLayout, a model that fuses a document's deep visual, shallow visual, and text features to localize and identify the different category blocks. The model comprises two stages, with three feature extractors built into the second stage. In the first stage, a Cascade Mask R-CNN model is applied directly to localize all category blocks of a document. In the second stage, the deep visual, shallow visual, and text features are extracted and fused to identify the category blocks. This strengthens the classification of the different category blocks on top of the existing localization technique. Experimental results show that the identification capability of VTLayout surpasses the most advanced DLA methods on the PubLayNet dataset, with an F1 score as high as 0.9599.
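The abstract describes a two-stage pipeline: Cascade Mask R-CNN localizes candidate blocks, and a second-stage classifier fuses deep visual, shallow visual, and text features of each block. The record gives no implementation details, so the following is only a minimal PyTorch sketch of such a second-stage fusion classifier: the ResNet-18 backbone, the feature dimensions, the pooled-crop stand-in for shallow visual features, and the averaged-word-embedding text features are all illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a VTLayout-style second stage: three feature
# extractors (deep visual, shallow visual, text) fused to classify a block.
# All architecture choices below are assumptions for illustration only.
import torch
import torch.nn as nn
import torchvision.models as models

CLASSES = ["Text", "Title", "List", "Table", "Figure"]  # PubLayNet categories

class FusionBlockClassifier(nn.Module):
    def __init__(self, text_dim=300, shallow_dim=64, num_classes=len(CLASSES)):
        super().__init__()
        # Deep visual branch: CNN backbone applied to the cropped block image
        # (ResNet-18 is an assumption; the paper may use a different network).
        backbone = models.resnet18(weights=None)
        self.deep = nn.Sequential(*list(backbone.children())[:-1])  # -> (B, 512, 1, 1)
        # Shallow visual branch: pool the raw crop to a small fixed grid,
        # a cheap stand-in for low-level appearance features.
        self.shallow = nn.Sequential(
            nn.AdaptiveAvgPool2d(4),   # (B, 3, 4, 4)
            nn.Flatten(),              # (B, 48)
            nn.Linear(48, shallow_dim),
        )
        # Fusion head over the concatenated deep / shallow / text features.
        self.head = nn.Sequential(
            nn.Linear(512 + shallow_dim + text_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, crop, text_feat):
        deep = self.deep(crop).flatten(1)   # (B, 512)
        shallow = self.shallow(crop)        # (B, shallow_dim)
        fused = torch.cat([deep, shallow, text_feat], dim=1)
        return self.head(fused)             # (B, num_classes) logits

# Usage: classify one block crop produced by the stage-one detector.
model = FusionBlockClassifier()
crop = torch.randn(1, 3, 224, 224)   # cropped block image
text_feat = torch.randn(1, 300)      # e.g., averaged word embeddings of the block's text
print(CLASSES[model(crop, text_feat).argmax(1).item()])
```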

Bibliographic Details
Published in: arXiv.org, 2021-08
Authors: Li, Shoubin; Ma, Xuyan; Pan, Shuaiqun; Hu, Jun; Shi, Lin; Wang, Qing
Format: Article
Language: English
Subjects: Computer vision; Feature extraction; Layouts; Machine learning
EISSN: 2331-8422
Online access: Full text