DW-GAN: Toward High-Fidelity Color-Tones of GAN-Generated Images With Dynamic Weights

Color-tone represents the prominent color of an image, and training generative adversarial nets (GANs) to change the color-tones of generated images is desirable in many applications. Advances such as HistoGAN can manipulate the color-tones of generated images to match a target image, yet challenges remain. The Kullback-Leibler (KL) divergence adopted by HistoGAN can cause color-tone mismatches, because it may assign an unbounded score to the generator. Moreover, relying only on distribution estimation also produces images of lower fidelity in HistoGAN. To address these issues, we propose a new approach named dynamic weights GAN (DW-GAN). We use two discriminators to estimate the degree of distribution matching and the similarity of details, using a Laplacian operator and a hinge loss. The Laplacian operator helps capture more image details, while the hinge loss, derived from the mean difference (MD), avoids the case of an infinite score. To synthesize the desired images, we combine the losses of the two discriminators with the generator loss and make the weights of the two estimated scores dynamic through the previous discriminators' outputs, given that the training signal of a generator comes from a discriminator. In addition, we integrate the dynamic weights into other GAN variants (e.g., HistoGAN and StyleGAN) to show the improved performance. Finally, we conduct extensive experiments on one industrial Fabric dataset and seven public datasets to demonstrate the strong performance of DW-GAN in producing higher-fidelity images and achieving the lowest Fréchet inception distance (FID) scores against SOTA baselines.
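The abstract only sketches the method, so the following is a minimal illustrative sketch (not the authors' code) of the two hinge-loss discriminator objectives it describes: one discriminator, here called d_dist, scores distribution matching, while a second, d_detail, scores detail similarity on Laplacian-filtered inputs. All function and variable names are hypothetical.

```python
# Illustrative sketch only: hinge losses for the two discriminators described
# in the abstract. d_dist judges distribution matching; d_detail judges detail
# similarity and is fed Laplacian-filtered images so edges/details dominate.
import torch
import torch.nn.functional as F

_LAPLACIAN = torch.tensor([[0., 1., 0.],
                           [1., -4., 1.],
                           [0., 1., 0.]])

def laplacian(x: torch.Tensor) -> torch.Tensor:
    """Apply a fixed 3x3 Laplacian filter to every channel of an NCHW batch."""
    c = x.shape[1]
    weight = _LAPLACIAN.to(x).view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    return F.conv2d(x, weight, padding=1, groups=c)

def hinge_d_loss(d, real, fake):
    """Hinge loss for a discriminator d: bounded, unlike a KL-style score."""
    return F.relu(1.0 - d(real)).mean() + F.relu(1.0 + d(fake)).mean()

def discriminator_losses(d_dist, d_detail, real, fake):
    """One loss per discriminator: distribution matching vs. detail similarity.
    `fake` is assumed to be detached from the generator at this point."""
    loss_dist = hinge_d_loss(d_dist, real, fake)
    loss_detail = hinge_d_loss(d_detail, laplacian(real), laplacian(fake))
    return loss_dist, loss_detail
```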

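The abstract further states that the two discriminator scores enter the generator loss with dynamic weights derived from the previous discriminators' outputs, but it does not give the weighting rule. As one possible reading (an assumption, not the paper's formula), the sketch below takes a softmax over the two previous losses, reusing the laplacian helper from the sketch above; again, all names are illustrative.

```python
# Illustrative sketch only: a dynamically weighted generator objective in the
# spirit of the abstract. The softmax weighting is an assumption made for
# illustration: the branch that currently scores the generator worse gets
# more weight in the combined loss.
import torch

def dynamic_weights(prev_loss_dist: torch.Tensor, prev_loss_detail: torch.Tensor):
    """Map the previous discriminator losses to two weights that sum to 1."""
    scores = torch.stack([prev_loss_dist.detach(), prev_loss_detail.detach()])
    w = torch.softmax(scores, dim=0)
    return w[0], w[1]

def generator_loss(d_dist, d_detail, fake, prev_loss_dist, prev_loss_detail):
    """Hinge-style generator loss combining both discriminators with dynamic weights."""
    w_dist, w_detail = dynamic_weights(prev_loss_dist, prev_loss_detail)
    g_dist = -d_dist(fake).mean()                 # fool the distribution discriminator
    g_detail = -d_detail(laplacian(fake)).mean()  # fool the detail discriminator
    return w_dist * g_dist + w_detail * g_detail
```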

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2024-12, Vol. 35 (12), p. 18090-18104
Authors: Li, Wei; Gu, Chengchun; Chen, Jinlin; Ma, Chao; Zhang, Xiaowu; Chen, Bin; Chen, Ping
Format: Article
Language: English
DOI: 10.1109/TNNLS.2023.3311545
ISSN: 2162-237X
EISSN: 2162-2388
Source: IEEE Electronic Library Online
Subjects:
Color-tone changing
dynamic weights
generative adversarial net (GAN)
Generative adversarial networks
Generators
Histograms
Image color analysis
Image edge detection
Laplace equations
Laplacian
Training