How Control Information Influences Multilingual Text Image Generation and Editing?
Visual text generation has significantly advanced through diffusion models aimed at producing images with readable and realistic text. Recent works primarily use a ControlNet-based framework, employing standard font text images to control diffusion models. Recognizing the critical role of control information in generating high-quality text, we investigate its influence from three perspectives: input encoding, role at different stages, and output features. Our findings reveal that: 1) Input control information has unique characteristics compared to conventional inputs like Canny edges and depth maps. 2) Control information plays distinct roles at different stages of the denoising process. 3) Output control features significantly differ from the base and skip features of the U-Net decoder in the frequency domain. Based on these insights, we propose TextGen, a novel framework designed to enhance generation quality by optimizing control information. We improve input and output features using Fourier analysis to emphasize relevant information and reduce noise. Additionally, we employ a two-stage generation framework to align the different roles of control information at different stages. Furthermore, we introduce an effective and lightweight dataset for training. Our method achieves state-of-the-art performance in both Chinese and English text generation. The code and dataset are available at https://github.com/CyrilSterling/TextGen.
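The abstract mentions using Fourier analysis on control features to emphasize relevant information and reduce noise. As a rough, hypothetical sketch of that general idea (not the authors' actual TextGen implementation; the function name and the `keep_low` parameter are invented here for illustration), a low-pass Fourier filter over a 2-D feature map could look like:

```python
import numpy as np

def fourier_filter(feature: np.ndarray, keep_low: float = 0.25) -> np.ndarray:
    """Retain only the low-frequency band of a 2-D feature map.

    `keep_low` is the fraction of the (centred) spectrum that is kept;
    all higher-frequency components are zeroed out.
    """
    h, w = feature.shape
    # Shift the DC component to the centre of the spectrum.
    spectrum = np.fft.fftshift(np.fft.fft2(feature))
    # Build a centred box mask covering the lowest `keep_low` frequencies.
    mask = np.zeros((h, w))
    ch, cw = h // 2, w // 2
    rh = max(1, int(h * keep_low / 2))
    rw = max(1, int(w * keep_low / 2))
    mask[ch - rh:ch + rh, cw - rw:cw + rw] = 1.0
    # Apply the mask and transform back to the spatial domain.
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return filtered.real
```

In a ControlNet-style pipeline, a mask like this (or a learned frequency weighting) would be applied to intermediate feature maps rather than raw images; the paper's description field below gives only the high-level idea.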
Saved in:
Published in: | arXiv.org 2024-07 |
---|---|
Main authors: | Zhang, Boqiang; Gao, Zuan; Qu, Yadong; Xie, Hongtao |
Format: | Article |
Language: | eng |
Subjects: | Datasets; Fourier analysis; Image processing; Image quality; Noise control; Noise generation |
Online access: | Full text |
creator | Zhang, Boqiang; Gao, Zuan; Qu, Yadong; Xie, Hongtao |
description | Visual text generation has significantly advanced through diffusion models aimed at producing images with readable and realistic text. Recent works primarily use a ControlNet-based framework, employing standard font text images to control diffusion models. Recognizing the critical role of control information in generating high-quality text, we investigate its influence from three perspectives: input encoding, role at different stages, and output features. Our findings reveal that: 1) Input control information has unique characteristics compared to conventional inputs like Canny edges and depth maps. 2) Control information plays distinct roles at different stages of the denoising process. 3) Output control features significantly differ from the base and skip features of the U-Net decoder in the frequency domain. Based on these insights, we propose TextGen, a novel framework designed to enhance generation quality by optimizing control information. We improve input and output features using Fourier analysis to emphasize relevant information and reduce noise. Additionally, we employ a two-stage generation framework to align the different roles of control information at different stages. Furthermore, we introduce an effective and lightweight dataset for training. Our method achieves state-of-the-art performance in both Chinese and English text generation. The code and dataset are available at https://github.com/CyrilSterling/TextGen. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-07 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_3081986588 |
source | Free E-Journals |
subjects | Datasets; Fourier analysis; Image processing; Image quality; Noise control; Noise generation |
title | How Control Information Influences Multilingual Text Image Generation and Editing? |