Exploring the Capacity of an Orderless Box Discretization Network for Multi-orientation Scene Text Detection

Multi-orientation scene text detection has recently gained significant research attention. Previous methods directly predict words or text lines, typically using quadrilateral shapes. However, many of these methods neglect the importance of consistent labeling, which is essential for maintaining a stable training process, especially when the training set comprises a large amount of data. Here we address this problem by proposing a new method, Orderless Box Discretization (OBD), which first discretizes the quadrilateral box into several key edges containing all potential horizontal and vertical positions. To decode accurate vertex positions, a simple yet effective matching procedure is proposed to reconstruct the quadrilateral bounding boxes. Our method resolves the labeling-ambiguity issue, which has a significant impact on the learning process. Extensive ablation studies are conducted to quantitatively validate the effectiveness of the proposed method. More importantly, based on OBD, we provide a detailed analysis of the impact of a collection of refinements, which may inspire others to build state-of-the-art text detectors. Combining OBD with these refinements, we achieve state-of-the-art performance on various benchmarks, including ICDAR 2015 and MLT. Our method also won first place in the text detection task of the recent ICDAR 2019 Robust Reading Challenge on Reading Chinese Text on Signboards, further demonstrating its superior performance. The code is available at https://git.io/TextDet.
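To make the encoding idea concrete, the following is a minimal, illustrative Python sketch of what the abstract describes: a quadrilateral annotated in an arbitrary vertex order is encoded as order-independent sorted lists of its x and y coordinates (the "key edges"), and a matching step pairs them back into vertices at decoding time. The function names and the max-area decoding heuristic are hypothetical stand-ins; the paper's actual matching procedure is part of the detection network and is not reproduced here.

# Illustrative sketch only: the encoding mirrors the idea described in the
# abstract (discretizing a quadrilateral into orderless sets of horizontal
# and vertical positions); the decoding heuristic below is a hypothetical
# stand-in for the paper's matching procedure, not the authors' code.
from itertools import permutations
from math import atan2
from typing import List, Tuple

Point = Tuple[float, float]


def encode_orderless(quad: List[Point]) -> Tuple[List[float], List[float]]:
    """Encode a quadrilateral as two sorted coordinate lists ("key edges").

    Sorting removes any dependence on which vertex the annotation starts
    from, which is the labeling inconsistency the abstract refers to.
    """
    xs = sorted(p[0] for p in quad)
    ys = sorted(p[1] for p in quad)
    return xs, ys


def _shoelace_area(pts: List[Point]) -> float:
    """Absolute area of a polygon whose vertices are given in order."""
    area = 0.0
    for i, (x1, y1) in enumerate(pts):
        x2, y2 = pts[(i + 1) % len(pts)]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0


def decode_by_matching(xs: List[float], ys: List[float]) -> List[Point]:
    """Reconstruct one plausible quadrilateral from the orderless coordinates.

    Heuristic for illustration: try every assignment of the four y values to
    the four sorted x values, order each candidate's vertices around their
    centroid, and keep the pairing with the largest enclosed area.
    """
    best, best_area = None, -1.0
    for perm in permutations(ys):
        pts = list(zip(xs, perm))
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        ordered = sorted(pts, key=lambda p: atan2(p[1] - cy, p[0] - cx))
        area = _shoelace_area(ordered)
        if area > best_area:
            best, best_area = ordered, area
    return best


if __name__ == "__main__":
    # The same rotated box can be annotated starting from any vertex;
    # the encoded key positions are identical in every case.
    quad = [(4.0, 1.0), (6.0, 3.0), (3.0, 6.0), (1.0, 4.0)]
    xs, ys = encode_orderless(quad)
    print("key x positions:", xs)  # [1.0, 3.0, 4.0, 6.0]
    print("key y positions:", ys)  # [1.0, 3.0, 4.0, 6.0]
    print("reconstructed quad:", decode_by_matching(xs, ys))

Because the sorted coordinate lists are the same no matter which vertex the annotator started from, the regression targets stay consistent across differently labeled instances of the same box, which is the labeling-consistency property the abstract emphasizes.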

Bibliographic Details

Published in: International Journal of Computer Vision, 2021-06, Vol. 129 (6), pp. 1972-1992
Authors: Liu, Yuliang; He, Tong; Chen, Hao; Wang, Xinyu; Luo, Canjie; Zhang, Shuaitao; Shen, Chunhua; Jin, Lianwen
Format: Article
Language: English
Online access: Full text
DOI: 10.1007/s11263-021-01459-7
ISSN: 0920-5691
EISSN: 1573-1405
Publisher: Springer US, New York
Source: SpringerLink Journals - AutoHoldings
Subjects:
Ablation
Analysis
Artificial Intelligence
Computer Imaging
Computer Science
Detectors
Discretization
Graph theory
Image Processing and Computer Vision
Impact analysis
Pattern Recognition
Pattern Recognition and Graphics
Quadrilaterals
Special Issue on Computer Vision in the Wild
Vertical orientation
Vision