PageNet: Towards End-to-End Weakly Supervised Page-Level Handwritten Chinese Text Recognition

Handwritten Chinese text recognition (HCTR) has been an active research topic for decades. However, most previous studies solely focus on the recognition of cropped text line images, ignoring the error caused by text line detection in real-world applications. Although some approaches aimed at page-level text recognition have been proposed in recent years, they are either limited to simple layouts or require very detailed annotations, including expensive line-level and even character-level bounding boxes. To this end, we propose PageNet for end-to-end weakly supervised page-level HCTR. PageNet detects and recognizes characters and predicts the reading order between them, which is more robust and flexible when dealing with complex layouts including multi-directional and curved text lines. Utilizing the proposed weakly supervised learning framework, PageNet requires only transcripts to be annotated for real data; however, it can still output detection and recognition results at both the character and line levels, avoiding the labor and cost of labeling bounding boxes of characters and text lines. Extensive experiments conducted on five datasets demonstrate the superiority of PageNet over existing weakly supervised and fully supervised page-level methods. These experimental results may spark further research beyond the realms of existing methods based on connectionist temporal classification or attention. The source code is available at https://github.com/shannanyinxiang/PageNet.
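As the abstract notes, PageNet is trained on real pages using only their transcripts. One natural way to realize such weak supervision is to align the network's ordered character predictions with the ground-truth transcript and treat matched characters as pseudo-labels for the detector and recognizer. The Python sketch below illustrates this kind of alignment with a longest-common-subsequence match; it is a simplified, hypothetical illustration, not the authors' implementation (see the paper and the linked repository for the actual method).

```python
# Minimal, illustrative sketch (not the authors' implementation) of transcript-only
# supervision: detected characters, ordered by a predicted reading order, are aligned
# to the ground-truth transcript; matched positions can act as pseudo-labels, so no
# bounding boxes need to be annotated for real training data.
from typing import List, Tuple


def align_to_transcript(predicted: List[str], transcript: List[str]) -> List[Tuple[int, int]]:
    """Return (pred_idx, gt_idx) pairs where the ordered predictions agree with the
    transcript, found via a longest-common-subsequence dynamic program."""
    n, m = len(predicted), len(transcript)
    # dp[i][j] = LCS length of predicted[:i] and transcript[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if predicted[i - 1] == transcript[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Backtrack to recover matched index pairs (candidate pseudo-labels).
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if predicted[i - 1] == transcript[j - 1]:
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]


if __name__ == "__main__":
    # Ordered recognition output for a page (one wrong and one spurious character)
    # versus the annotated transcript; matched pairs could supervise further training.
    pred = list("手写中攵识别x")
    gt = list("手写中文识别")
    print(align_to_transcript(pred, gt))
```

The sketch only shows the matching step; in a full system such matches would drive iterative pseudo-label updates for both detection and recognition.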


Bibliographic Details
Published in: International journal of computer vision, 2022-11, Vol. 130 (11), p. 2623-2645
Main Authors: Peng, Dezhi; Jin, Lianwen; Liu, Yuliang; Luo, Canjie; Lai, Songxuan
Format: Article
Language: English
Subjects:
Online Access: Full text
container_end_page 2645
container_issue 11
container_start_page 2623
container_title International journal of computer vision
container_volume 130
creator Peng, Dezhi
Jin, Lianwen
Liu, Yuliang
Luo, Canjie
Lai, Songxuan
doi_str_mv 10.1007/s11263-022-01654-0
format Article
identifier ISSN: 0920-5691
ispartof International journal of computer vision, 2022-11, Vol.130 (11), p.2623-2645
issn 0920-5691
eissn 1573-1405
language eng
recordid cdi_proquest_journals_2719604727
source Springer Nature - Complete Springer Journals
subjects Accuracy
Annotations
Artificial Intelligence
Boxes
Character recognition
Computer Imaging
Computer Science
Handwriting recognition
Image Processing and Computer Vision
Laboratories
Layouts
Methods
Object recognition
Pattern Recognition
Pattern Recognition and Graphics
Reading
Source code
Supervision
Vision
title PageNet: Towards End-to-End Weakly Supervised Page-Level Handwritten Chinese Text Recognition