A robust video text extraction method based on text traversing line and stroke connectivity

Automatic video-text extraction is an important step in video content comprehension. This paper presents a robust video-text extraction method that automatically extracts horizontally aligned text in different languages. First, an unsupervised paradigm based on Haar wavelets is applied to obtain candidate text regions. Second, the traversing line and its amplitude spectrum are introduced and used to create a boundary for each text line. Last, the traversing line with the maximum feature value in each refined text region is used to seed key-points in strokes, from which region growing produces a binary text image. Experiments on a variety of video sources show that the method is robust to text of various colors, fonts, and sizes in complex images, and performs better than conventional methods.
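
The first step described in the abstract, locating candidate text regions from Haar-wavelet detail energy, can be sketched briefly. This is not the authors' implementation; the function name, block size, and threshold in the usage note are illustrative assumptions only.

```python
import numpy as np

def haar_detail_energy(gray, block=16):
    """One-level 2-D Haar transform of a grayscale frame, followed by a
    block-wise sum of detail-subband energy. Text-rich areas tend to
    concentrate high-frequency energy, so high-scoring blocks can be
    treated as candidate text regions. `gray` is a 2-D float array whose
    height and width are assumed to be even."""
    a = gray[0::2, 0::2]   # top-left pixel of each 2x2 cell
    b = gray[0::2, 1::2]   # top-right
    c = gray[1::2, 0::2]   # bottom-left
    d = gray[1::2, 1::2]   # bottom-right
    # Three detail sub-bands: column-wise, row-wise, and diagonal differences.
    dh = (a - b + c - d) / 2.0
    dv = (a + b - c - d) / 2.0
    dd = (a - b - c + d) / 2.0
    energy = dh**2 + dv**2 + dd**2
    # Aggregate the half-resolution energy map over non-overlapping blocks.
    h, w = energy.shape
    h -= h % block
    w -= w % block
    tiles = energy[:h, :w].reshape(h // block, block, w // block, block)
    return tiles.sum(axis=(1, 3))

# Usage sketch: flag blocks whose detail energy is well above the frame average.
# energy_map = haar_detail_energy(frame_gray.astype(float))
# candidate_blocks = energy_map > energy_map.mean() + 2.0 * energy_map.std()
```
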

Detailed description

Saved in:
Bibliographic details
Main authors: Peng Tianqiang, Tian Pohuang, Li Bicheng
Format: Conference Proceeding
Language: eng
Subjects:
Online access: Order full text
container_end_page 1005
container_issue
container_start_page 1002
container_title
container_volume
creator Peng Tianqiang; Tian Pohuang; Li Bicheng
description Automatic video-text extraction is an important step in video content comprehension. This paper presents a robust video-text extraction method that automatically extracts horizontally aligned text in different languages. First, an unsupervised paradigm based on Haar wavelets is applied to obtain candidate text regions. Second, the traversing line and its amplitude spectrum are introduced and used to create a boundary for each text line. Last, the traversing line with the maximum feature value in each refined text region is used to seed key-points in strokes, from which region growing produces a binary text image. Experiments on a variety of video sources show that the method is robust to text of various colors, fonts, and sizes in complex images, and performs better than conventional methods.
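
The final step of the described pipeline grows regions outward from stroke key-points seeded on a traversing line to form the binary text image. The sketch below shows generic intensity-based region growing under that description; the function name, the 4-connected neighbourhood, and the tolerance `tol` are illustrative assumptions, not the paper's actual choices.

```python
import numpy as np
from collections import deque

def grow_strokes(gray, seeds, tol=20):
    """Binary text mask by region growing. `seeds` are (row, col) key-points,
    e.g. sampled where a traversing line crosses stroke pixels; a pixel is
    added while its intensity stays within `tol` of its seed's value,
    using a 4-connected neighbourhood."""
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque()
    for r, c in seeds:
        if not mask[r, c]:
            mask[r, c] = True
            queue.append((r, c, int(gray[r, c])))
    while queue:
        r, c, ref = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(int(gray[nr, nc]) - ref) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc, ref))
    return mask
```
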
doi_str_mv 10.1109/ICOSP.2008.4697297
format Conference Proceeding
fulltext fulltext_linktorsrc
identifier ISSN: 2164-5221
ispartof 2008 9th International Conference on Signal Processing, 2008, p.1002-1005
issn 2164-5221
language eng
recordid cdi_ieee_primary_4697297
source IEEE Electronic Library (IEL) Conference Proceedings
subjects Data mining
Feature extraction
Filters
Image color analysis
Image edge detection
Indexing
Information science
Internet
Robustness
Video compression
title A robust video text extraction method based on text traversing line and stroke connectivity
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-14T10%3A37%3A16IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_6IE&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=A%20robust%20video%20text%20extraction%20method%20based%20on%20text%20traversing%20line%20and%20stroke%20connectivity&rft.btitle=2008%209th%20International%20Conference%20on%20Signal%20Processing&rft.au=Peng%20Tianqiang&rft.date=2008-10&rft.spage=1002&rft.epage=1005&rft.pages=1002-1005&rft.issn=2164-5221&rft.isbn=1424421780&rft.isbn_list=9781424421787&rft_id=info:doi/10.1109/ICOSP.2008.4697297&rft_dat=%3Cieee_6IE%3E4697297%3C/ieee_6IE%3E%3Curl%3E%3C/url%3E&rft.eisbn=1424421799&rft.eisbn_list=9781424421794&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=4697297&rfr_iscdi=true