Choose What You Need: Disentangled Representation Learning for Scene Text Recognition, Removal and Editing

Scene text images contain not only style information (font, background) but also content information (character, texture). Different scene text tasks need different information, but previous representation learning methods use tightly coupled features for all tasks, resulting in sub-optimal performance. We propose a Disentangled Representation Learning framework (DARLING) aimed at disentangling these two types of features for improved adaptability in addressing various downstream tasks (choose what you really need). Specifically, we synthesize a dataset of image pairs with identical style but different content. Based on this dataset, we decouple the two types of features through the supervision design. Concretely, we directly split the visual representation into style and content features: the content features are supervised by a text recognition loss, while an alignment loss aligns the style features of the image pairs. Then, the style features are employed to reconstruct the counterpart image via an image decoder, guided by a prompt that indicates the counterpart's content. Such an operation effectively decouples the features based on their distinctive properties. To the best of our knowledge, this is the first work in the field of scene text to disentangle the inherent properties of text images. Our method achieves state-of-the-art performance in Scene Text Recognition, Removal, and Editing.
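
The abstract above describes the supervision scheme only at a high level. The following PyTorch-style sketch is purely an illustration of how the three signals it mentions (a recognition loss on content features, an alignment loss on style features, and reconstruction of the counterpart image conditioned on a content prompt) could be combined; the encoder split, the `recognizer`/`decoder` interfaces, and the `prompt` argument are assumptions for illustration, not the authors' actual implementation.

```python
# A minimal, hypothetical sketch (not the authors' released code) of the
# supervision described above: an encoder splits each image into style and
# content features; content is supervised by recognition, style is aligned
# across a same-style pair and used to reconstruct the counterpart image.
import torch.nn as nn
import torch.nn.functional as F


class DisentangledEncoder(nn.Module):
    """Toy encoder that splits one visual representation into style/content."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(          # stand-in for a real backbone
            nn.Conv2d(3, feat_dim, 3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.to_style = nn.Linear(feat_dim, feat_dim)
        self.to_content = nn.Linear(feat_dim, feat_dim)

    def forward(self, img):
        h = self.backbone(img)
        return self.to_style(h), self.to_content(h)


def darling_style_loss(encoder, recognizer, decoder, img_a, img_b, text_a, text_b):
    """Losses for one image pair with identical style but different content.

    `recognizer(content, text)` and `decoder(style, prompt=...)` are
    placeholder callables; the paper's actual heads and prompt design differ.
    """
    style_a, content_a = encoder(img_a)
    style_b, content_b = encoder(img_b)

    # Content features are supervised by a text recognition loss.
    rec_loss = recognizer(content_a, text_a) + recognizer(content_b, text_b)

    # An alignment loss pulls the style features of the pair together.
    align_loss = F.mse_loss(style_a, style_b)

    # Style features reconstruct the counterpart image, prompted with its content.
    recon_a = decoder(style_b, prompt=text_a)   # style of b + content prompt of a
    recon_b = decoder(style_a, prompt=text_b)
    recon_loss = F.l1_loss(recon_a, img_a) + F.l1_loss(recon_b, img_b)

    return rec_loss + align_loss + recon_loss
```

Swapping the reconstruction targets within the pair is what would force the style branch to carry no content information, which is the core idea the abstract describes.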


Bibliographic Details
Main Authors: Zhang, Boqiang; Xie, Hongtao; Gao, Zuan; Wang, Yuxin
Format: Article
Language: eng
Published: 2024-05-07 (arXiv)
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online Access: Order full text
creator Zhang, Boqiang; Xie, Hongtao; Gao, Zuan; Wang, Yuxin
description Scene text images contain not only style information (font, background) but also content information (character, texture). Different scene text tasks need different information, but previous representation learning methods use tightly coupled features for all tasks, resulting in sub-optimal performance. We propose a Disentangled Representation Learning framework (DARLING) aimed at disentangling these two types of features for improved adaptability in addressing various downstream tasks (choose what you really need). Specifically, we synthesize a dataset of image pairs with identical style but different content. Based on this dataset, we decouple the two types of features through the supervision design. Concretely, we directly split the visual representation into style and content features: the content features are supervised by a text recognition loss, while an alignment loss aligns the style features of the image pairs. Then, the style features are employed to reconstruct the counterpart image via an image decoder, guided by a prompt that indicates the counterpart's content. Such an operation effectively decouples the features based on their distinctive properties. To the best of our knowledge, this is the first work in the field of scene text to disentangle the inherent properties of text images. Our method achieves state-of-the-art performance in Scene Text Recognition, Removal, and Editing.
doi_str_mv 10.48550/arxiv.2405.04377
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2405.04377
language eng
recordid cdi_arxiv_primary_2405_04377
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title Choose What You Need: Disentangled Representation Learning for Scene Text Recognition, Removal and Editing
url https://arxiv.org/abs/2405.04377