Choose What You Need: Disentangled Representation Learning for Scene Text Recognition, Removal and Editing
Saved in:
Main authors: | , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Scene text images contain not only style information (font, background) but
also content information (character, texture). Different scene text tasks need
different information, but previous representation learning methods use tightly
coupled features for all tasks, resulting in sub-optimal performance. We
propose a Disentangled Representation Learning framework (DARLING) aimed at
disentangling these two types of features for improved adaptability in
addressing various downstream tasks (choose what you really need).
Specifically, we synthesize a dataset of image pairs with identical style but
different content. Based on this dataset, we decouple the two types of features
through the supervision design: we directly split the visual representation
into style and content features; the content features are supervised by a text
recognition loss, while an alignment loss aligns the style features within each
image pair. The style features are then used to reconstruct the counterpart
image via an image decoder, given a prompt that indicates the counterpart's
content. This operation effectively decouples the features according to their
distinctive properties. To the best of our knowledge, this is the first work in
the field of scene text to disentangle the inherent properties of text images.
Our method achieves state-of-the-art performance in Scene Text Recognition,
Removal, and Editing. |
---|---|
DOI: | 10.48550/arxiv.2405.04377 |
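
The abstract outlines a three-part supervision scheme: split the visual representation into style and content halves, supervise the content half with a recognition loss, align the style halves of a style-matched image pair, and reconstruct the counterpart image from one image's style plus a prompt carrying the counterpart's content. The sketch below is a minimal, hypothetical PyTorch illustration of that objective, not the authors' released code; all module names, feature dimensions, the single-character recognition head, and the toy decoder are assumptions made for illustration only.

```python
# Hypothetical sketch of the disentanglement objective described in the abstract.
# Not the DARLING implementation: the encoder, decoder, and losses are simplified stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledEncoder(nn.Module):
    """Encodes an image and splits the feature vector into style / content halves."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):
        feat = self.backbone(x)                 # (B, feat_dim)
        style, content = feat.chunk(2, dim=1)   # split into two halves
        return style, content

class PromptedDecoder(nn.Module):
    """Reconstructs an image from style features plus a content prompt (character ids)."""
    def __init__(self, feat_dim=256, vocab_size=100, out_size=32):
        super().__init__()
        self.prompt_embed = nn.Embedding(vocab_size, feat_dim // 2)
        self.fc = nn.Linear(feat_dim, 3 * out_size * out_size)
        self.out_size = out_size

    def forward(self, style, prompt_ids):
        prompt = self.prompt_embed(prompt_ids).mean(dim=1)      # pool the character prompt
        img = self.fc(torch.cat([style, prompt], dim=1))
        return img.view(-1, 3, self.out_size, self.out_size)

def training_step(enc, dec, rec_head, img_a, img_b, text_a, text_b):
    """img_a / img_b share style but differ in content; text_* are character-id tensors."""
    style_a, content_a = enc(img_a)
    style_b, content_b = enc(img_b)

    # 1) Content features are supervised by a (here heavily simplified) recognition loss.
    loss_rec = F.cross_entropy(rec_head(content_a), text_a[:, 0]) + \
               F.cross_entropy(rec_head(content_b), text_b[:, 0])

    # 2) Alignment loss pulls the style features of the paired images together.
    loss_align = F.mse_loss(style_a, style_b)

    # 3) Style of A plus the content prompt of B reconstructs image B, and vice versa.
    loss_recon = F.l1_loss(dec(style_a, text_b), img_b) + \
                 F.l1_loss(dec(style_b, text_a), img_a)

    return loss_rec + loss_align + loss_recon

if __name__ == "__main__":
    enc, dec = DisentangledEncoder(), PromptedDecoder()
    rec_head = nn.Linear(128, 100)                       # content half -> vocabulary logits
    img_a, img_b = torch.rand(4, 3, 32, 32), torch.rand(4, 3, 32, 32)
    text_a, text_b = torch.randint(0, 100, (4, 5)), torch.randint(0, 100, (4, 5))
    loss = training_step(enc, dec, rec_head, img_a, img_b, text_a, text_b)
    loss.backward()
    print(float(loss))
```

The key design point the sketch tries to mirror is that the decoder never sees the content features of the source image: it must combine the source's style with an explicit content prompt, which pressures the style half to carry only style information.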