Facial Image Inpainting With Deep Generative Model and Patch Search Using Region Weight
Facial image inpainting is a challenging task because the missing region needs to be filled with new pixels that carry semantic information (e.g., noses and mouths). Traditional methods that search for similar patches are mature, but they are not suitable for semantic inpainting. Recently, methods based on deep generative models have been able to perform semantic image inpainting, although their results are often blurry or distorted. In this paper, by analyzing the advantages and disadvantages of the two approaches, we propose a novel and efficient method that combines them in series: it searches for the most reasonable similar patch using the coarse image generated by the deep generative model. When training the model, adding a Laplace loss to the standard loss accelerates convergence. In addition, we define a region weight (RW) used when searching for similar patches, which makes the edge connection more natural. Our method addresses the blurred results of deep generative models and the unsatisfactory semantic information of traditional methods. Experiments on the CelebA dataset demonstrate that our method achieves realistic and natural facial inpainting results.
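The record does not include the authors' code, but the two mechanisms named in the abstract can be illustrated with a minimal PyTorch-style sketch: a Laplace loss that compares high-frequency (Laplacian-filtered) content of the generated and ground-truth images, and a per-pixel region weight applied to a sum-of-squared-differences patch distance. The function names, the 3x3 Laplacian kernel, and the weighting scheme are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only -- NOT the authors' released code.
# Assumes PyTorch; the kernel choice and weighting scheme are assumptions.
import torch
import torch.nn.functional as F

# 3x3 Laplacian kernel (one common discretization), applied depthwise per channel.
_LAPLACE_KERNEL = torch.tensor([[0., 1., 0.],
                                [1., -4., 1.],
                                [0., 1., 0.]]).view(1, 1, 3, 3)

def laplace_loss(pred, target):
    """L1 distance between Laplacian responses of prediction and ground truth.

    pred, target: (N, C, H, W) images. Penalizing high-frequency differences is
    one way such a term could speed up convergence alongside the standard loss.
    """
    c = pred.shape[1]
    kernel = _LAPLACE_KERNEL.to(pred.device, pred.dtype).repeat(c, 1, 1, 1)
    lap_pred = F.conv2d(pred, kernel, padding=1, groups=c)
    lap_target = F.conv2d(target, kernel, padding=1, groups=c)
    return F.l1_loss(lap_pred, lap_target)

def weighted_patch_distance(query_patch, candidate_patch, region_weight):
    """Region-weighted sum of squared differences between two patches.

    query_patch, candidate_patch: (C, h, w) patches; the query is cut from the
    coarse image produced by the generative model. region_weight: (h, w) map
    that up-weights pixels near the hole boundary so the selected patch joins
    the known region more naturally.
    """
    sq_diff = (query_patch - candidate_patch) ** 2
    return (sq_diff.sum(dim=0) * region_weight).sum()
```

Under these assumptions, the training objective would be something like `standard_loss + lam * laplace_loss(pred, target)` for some weighting factor `lam`, and at inpainting time the candidate patch with the smallest weighted distance would be pasted into the missing region.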
Saved in:
Published in: | IEEE Access, 2019, Vol. 7, pp. 67456-67468 |
---|---|
Main authors: | Wei, Jinsheng; Lu, Guanming; Liu, Huaming; Yan, Jingjie |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Full text |
container_end_page | 67468 |
---|---|
container_issue | |
container_start_page | 67456 |
container_title | IEEE access |
container_volume | 7 |
creator | Wei, Jinsheng; Lu, Guanming; Liu, Huaming; Yan, Jingjie |
description | Facial image inpainting is a challenging task because the missing region needs to be filled with new pixels that carry semantic information (e.g., noses and mouths). Traditional methods that search for similar patches are mature, but they are not suitable for semantic inpainting. Recently, methods based on deep generative models have been able to perform semantic image inpainting, although their results are often blurry or distorted. In this paper, by analyzing the advantages and disadvantages of the two approaches, we propose a novel and efficient method that combines them in series: it searches for the most reasonable similar patch using the coarse image generated by the deep generative model. When training the model, adding a Laplace loss to the standard loss accelerates convergence. In addition, we define a region weight (RW) used when searching for similar patches, which makes the edge connection more natural. Our method addresses the blurred results of deep generative models and the unsatisfactory semantic information of traditional methods. Experiments on the CelebA dataset demonstrate that our method achieves realistic and natural facial inpainting results. |
doi_str_mv | 10.1109/ACCESS.2019.2919169 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 2169-3536 |
ispartof | IEEE access, 2019, Vol.7, p.67456-67468 |
issn | 2169-3536 2169-3536 |
language | eng |
recordid | cdi_ieee_primary_8723111 |
source | IEEE Open Access Journals; DOAJ Directory of Open Access Journals; Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals |
subjects | deep generative model; Deep learning; Facial image inpainting; Generators; Image edge detection; region weight; Searching; Semantics; similar patch; Task analysis; Telecommunications; Training; Weight |
title | Facial Image Inpainting With Deep Generative Model and Patch Search Using Region Weight |