Bridging partial-gated convolution with transformer for smooth-variation image inpainting
Deep learning has brought essential improvements to image inpainting technology. Conventional deep-learning methods primarily focus on creating visually appealing content in the missing parts of images. However, these methods usually generate edge variations and blurry structures in the filled images, which lead to imbalances between the quantitative metrics PSNR/SSIM and LPIPS/FID. In this work, we introduce a pioneering model called PTG-Fill, which utilizes a coarse-to-fine architecture to achieve smooth-variation image inpainting. Our approach adopts the novel Stable-Partial Convolution to construct the coarse network, which integrates a smooth mask-update process to ensure its long-term operation. Meanwhile, we propose the novel Distinctive-Gated Convolution to construct the refined network, which diminishes pixel-level variations through distinctive attention. Additionally, we build a novel Transformer bridger to preserve the in-depth features for image refinement and facilitate the operation of the two-stage network. Our extensive experiments demonstrate that PTG-Fill outperforms previous state-of-the-art methods both quantitatively and qualitatively under various mask ratios on four benchmark datasets: CelebA-HQ, FFHQ, Paris StreetView, and Places2. Code and pre-trained weights are available at https://github.com/zeyuwang-zju/PTG-Fill.
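The abstract names two convolution variants without giving their equations. Both build on well-known base operations: partial convolution (Liu et al., 2018), which convolves only valid pixels and updates the hole mask layer by layer, and gated convolution (Yu et al., 2019), which learns a soft per-location gate. The PyTorch sketch below shows only these base operations; the paper's Stable-Partial and Distinctive-Gated variants (the smooth mask update and the distinctive attention) modify them in ways this record does not specify, so the class names, shapes, and simplified bias handling here are illustrative assumptions rather than the authors' implementation — for that, see the linked repository.

```python
# Illustrative PyTorch sketch of the base operations that PTG-Fill's
# Stable-Partial and Distinctive-Gated Convolutions build on. This is
# NOT the paper's implementation; details here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PartialConv2d(nn.Conv2d):
    """Partial convolution: convolve valid pixels only, then update the mask."""

    def forward(self, x, mask):
        # mask is 1 at valid pixels and 0 inside the hole, shape (N, 1, H, W).
        with torch.no_grad():
            ones = torch.ones(1, 1, *self.kernel_size, device=x.device)
            # Number of valid input pixels under each kernel window.
            valid = F.conv2d(mask, ones, stride=self.stride, padding=self.padding)
        out = super().forward(x * mask)  # bias handling simplified vs. the original paper
        # Re-normalize by the fraction of valid pixels (avoid division by zero).
        out = out * (ones.numel() / valid.clamp(min=1.0))
        # Hard mask update: a location becomes valid once any input was valid.
        # PTG-Fill's "smooth mask-update" presumably softens this step.
        new_mask = (valid > 0).float()
        return out * new_mask, new_mask


class GatedConv2d(nn.Module):
    """Gated convolution: features modulated by a learned soft gate."""

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.gate = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)

    def forward(self, x):
        # The sigmoid gate in [0, 1] decides per pixel how much feature to pass.
        return F.elu(self.feature(x)) * torch.sigmoid(self.gate(x))


# Usage: the hole shrinks as partial convolutions are stacked, which is why a
# coarse network built from them can fill a region progressively.
img = torch.randn(1, 3, 64, 64)
hole_mask = torch.ones(1, 1, 64, 64)
hole_mask[:, :, 16:48, 16:48] = 0.0  # square hole
pconv = PartialConv2d(3, 16, kernel_size=3, padding=1)
feat, hole_mask = pconv(img, hole_mask)
```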
Saved in:
Published in: | Multimedia tools and applications, 2024-02, Vol.83 (32), p.78387-78406 |
---|---|
Main Authors: | Wang, Zeyu; Shen, Haibin; Huang, Kejie |
Format: | Article |
Language: | English |
Subjects: | Computer Communication Networks; Computer Science; Convolution; Data Structures and Information Theory; Datasets; Deep learning; Multimedia; Multimedia Information Systems; Special Purpose and Application-Based Systems; Track 6: Computer Vision for Multimedia Applications; Transformers |
Online Access: | Full text |
DOI: | 10.1007/s11042-024-18590-5 |
ISSN: | 1380-7501 (print); 1573-7721 (electronic) |
Publisher: | New York: Springer US |
Published: | 2024-02-23 |
Source: | SpringerNature Journals |