SDP-GAN: Saliency Detail Preservation Generative Adversarial Networks for High Perceptual Quality Style Transfer
The paper proposes a solution to effectively handle salient regions for style transfer between unpaired datasets. Recently, Generative Adversarial Networks (GANs) have demonstrated their potential for translating images from a source domain {X} to a target domain {Y} in the absence of paired examples...
Saved in:
Published in: | IEEE transactions on image processing 2021, Vol.30, p.374-385 |
---|---|
Main authors: | Li, Ru; Wu, Chi-Hao; Liu, Shuaicheng; Wang, Jue; Wang, Guangfu; Liu, Guanghui; Zeng, Bing |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Order full text |
container_end_page | 385 |
---|---|
container_issue | |
container_start_page | 374 |
container_title | IEEE transactions on image processing |
container_volume | 30 |
creator | Li, Ru; Wu, Chi-Hao; Liu, Shuaicheng; Wang, Jue; Wang, Guangfu; Liu, Guanghui; Zeng, Bing |
description | The paper proposes a solution to effectively handle salient regions for style transfer between unpaired datasets. Recently, Generative Adversarial Networks (GANs) have demonstrated their potential for translating images from a source domain {X} to a target domain {Y} in the absence of paired examples. However, such a translation cannot guarantee high perceptual quality results. Existing style transfer methods work well with relatively uniform content, but they often fail to capture geometric or structural patterns that typically belong to salient regions. Detail losses in structured regions and undesired artifacts in smooth regions are unavoidable even if each individual region is correctly transferred into the target style. In this paper, we propose SDP-GAN, a GAN-based network for solving such problems while generating enjoyable style transfer results. We introduce a saliency network, which is trained simultaneously with the generator. The saliency network has two functions: (1) providing constraints for the content loss to increase the penalty on salient regions, and (2) supplying saliency features to the generator to produce coherent results. Moreover, two novel losses are proposed to optimize the generator and saliency networks. The proposed method preserves details in important salient regions and improves the overall image perceptual quality. Qualitative and quantitative comparisons against several leading prior methods demonstrate the superiority of our method. |
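The abstract's first function of the saliency network, constraining the content loss to penalize salient regions more heavily, can be sketched as a saliency-weighted feature distance. This is a minimal illustration only: the function name, the `lam` weighting parameter, and the use of an L1 feature distance are assumptions, not the paper's exact formulation.

```python
import numpy as np

def saliency_weighted_content_loss(feat_src, feat_gen, saliency, lam=1.0):
    """Content loss that penalizes salient regions more heavily.

    feat_src, feat_gen: feature maps of the source and generated images,
        shape (H, W, C).
    saliency: saliency map with values in [0, 1], shape (H, W).
    lam: extra weight applied to salient pixels (hypothetical parameter).
    """
    # Per-pixel weight: 1 everywhere, up to 1 + lam in fully salient regions.
    weight = 1.0 + lam * saliency[..., None]  # broadcast over channels
    return float(np.mean(weight * np.abs(feat_src - feat_gen)))
```

With this weighting, a reconstruction error in a salient region contributes more to the loss than the same error in a non-salient region, which matches the abstract's stated goal of increasing the penalty on salient regions.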
doi_str_mv | 10.1109/TIP.2020.3036754 |
format | Article |
fullrecord | Cleaned record summary: article in IEEE transactions on image processing, 2021, Vol. 30, pp. 374-385; publisher: IEEE, New York; DOI: 10.1109/TIP.2020.3036754; PMID: 33186111; ISSN: 1057-7149; EISSN: 1941-0042; CODEN: IIPRE4; IEEE document id: 9259251; 12 pages; peer reviewed. Title, abstract, author list, and subject terms as given in the fields above. |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1057-7149 |
ispartof | IEEE transactions on image processing, 2021, Vol.30, p.374-385 |
issn | 1057-7149 (print); 1941-0042 (electronic) |
language | eng |
recordid | cdi_proquest_journals_2465443902 |
source | IEEE Electronic Library (IEL) |
subjects | detail preservation; Domains; Gallium nitride; Generative adversarial network; Generative adversarial networks; Generators; Image quality; Linear programming; Salience; Saliency detection; Streaming media; style transfer; Task analysis |
title | SDP-GAN: Saliency Detail Preservation Generative Adversarial Networks for High Perceptual Quality Style Transfer |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-26T00%3A51%3A36IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=SDP-GAN:%20Saliency%20Detail%20Preservation%20Generative%20Adversarial%20Networks%20for%20High%20Perceptual%20Quality%20Style%20Transfer&rft.jtitle=IEEE%20transactions%20on%20image%20processing&rft.au=Li,%20Ru&rft.date=2021&rft.volume=30&rft.spage=374&rft.epage=385&rft.pages=374-385&rft.issn=1057-7149&rft.eissn=1941-0042&rft.coden=IIPRE4&rft_id=info:doi/10.1109/TIP.2020.3036754&rft_dat=%3Cproquest_RIE%3E2465443902%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2465443902&rft_id=info:pmid/33186111&rft_ieee_id=9259251&rfr_iscdi=true |