Context-Patch Face Hallucination Based on Thresholding Locality-Constrained Representation and Reproducing Learning
Face hallucination is a technique that reconstructs high-resolution (HR) faces from low-resolution (LR) faces using prior knowledge learned from HR/LR face pairs. Most state-of-the-art methods leverage position-patch prior knowledge of the human face to estimate the optimal representation coefficients...
Saved in:
Published in: | IEEE transactions on cybernetics 2020-01, Vol.50 (1), p.324-337 |
---|---|
Main authors: | Jiang, Junjun, Yu, Yi, Tang, Suhua, Ma, Jiayi, Aizawa, Akiko, Aizawa, Kiyoharu |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
container_end_page | 337 |
---|---|
container_issue | 1 |
container_start_page | 324 |
container_title | IEEE transactions on cybernetics |
container_volume | 50 |
creator | Jiang, Junjun Yu, Yi Tang, Suhua Ma, Jiayi Aizawa, Akiko Aizawa, Kiyoharu |
description | Face hallucination is a technique that reconstructs high-resolution (HR) faces from low-resolution (LR) faces using prior knowledge learned from HR/LR face pairs. Most state-of-the-art methods leverage position-patch prior knowledge of the human face to estimate the optimal representation coefficients for each image patch. However, they focus only on the position information and usually ignore the context information of the image patch. In addition, when they are confronted with misalignment or the small sample size (SSS) problem, their hallucination performance is very poor. To this end, this paper incorporates the contextual information of the image patch and proposes a powerful and efficient context-patch-based face hallucination approach, namely, thresholding locality-constrained representation and reproducing learning (TLcR-RL). Under the context-patch-based framework, we advance a thresholding-based representation method to enhance the reconstruction accuracy and reduce the computational complexity. To further improve the performance of the proposed algorithm, we propose a promotion strategy called reproducing learning. By adding the estimated HR face to the training set, which simulates the case in which the HR version of the input LR face is present in the training set, reproducing learning iteratively enhances the final hallucination result. Experiments demonstrate that the proposed TLcR-RL method substantially improves the hallucinated results, both subjectively and objectively. In addition, the proposed framework is more robust to face misalignment and the SSS problem, and it still produces a good hallucinated HR face when the LR test face comes from the real world. The MATLAB source code is available at https://github.com/junjun-jiang/TLcR-RL. |
doi_str_mv | 10.1109/TCYB.2018.2868891 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 2168-2267 |
ispartof | IEEE transactions on cybernetics, 2020-01, Vol.50 (1), p.324-337 |
issn | 2168-2267 2168-2275 |
language | eng |
recordid | cdi_proquest_journals_2308297868 |
source | IEEE Electronic Library (IEL) |
subjects | Algorithms Automation & Control Systems Computer Science Computer Science, Artificial Intelligence Computer Science, Cybernetics Computer simulation Context-patch Face face hallucination Hallucinations Image reconstruction Image resolution image super-resolution Indexes Informatics Machine learning Mathematical model Misalignment Performance enhancement position-patch Representations reproducing learning (RL) Science & Technology Source code Technology Training |
title | Context-Patch Face Hallucination Based on Thresholding Locality-Constrained Representation and Reproducing Learning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-16T06%3A29%3A00IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Context-Patch%20Face%20Hallucination%20Based%20on%20Thresholding%20Locality-Constrained%20Representation%20and%20Reproducing%20Learning&rft.jtitle=IEEE%20transactions%20on%20cybernetics&rft.au=Jiang,%20Junjun&rft.date=2020-01-01&rft.volume=50&rft.issue=1&rft.spage=324&rft.epage=337&rft.pages=324-337&rft.issn=2168-2267&rft.eissn=2168-2275&rft.coden=ITCEB8&rft_id=info:doi/10.1109/TCYB.2018.2868891&rft_dat=%3Cproquest_RIE%3E2122587718%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2308297868&rft_id=info:pmid/30334810&rft_ieee_id=8493598&rfr_iscdi=true |
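The abstract describes a patch-wise pipeline: for each LR input patch, keep only the training patches closest to it (the thresholding step), solve a locality-constrained least-squares problem for combination weights, and apply those weights to the corresponding HR training patches. The sketch below illustrates that idea in Python with a standard closed-form locality-constrained solution (as used in LLC-style coding); it is a minimal illustrative approximation under assumed patch shapes and parameter names, not the authors' released MATLAB implementation.

```python
import numpy as np

def tlcr_reconstruct(lr_patch, lr_dict, hr_dict, k=5, lam=1e-3):
    """Sketch of thresholding locality-constrained representation:
    retain the k training patches nearest the LR input, solve a
    locality-regularized least squares for the weights, and apply
    the same weights to the HR counterparts."""
    # Distance from the input patch to every LR training patch.
    dists = np.linalg.norm(lr_dict - lr_patch, axis=1)
    # Thresholding step: keep only the k nearest neighbours.
    idx = np.argsort(dists)[:k]
    D, H, d = lr_dict[idx], hr_dict[idx], dists[idx]
    # Locality-constrained least squares (closed form):
    # minimise ||x - D^T w||^2 + lam * ||diag(d) w||^2 with sum(w) = 1.
    Z = D - lr_patch                     # patches shifted to the input
    C = Z @ Z.T + lam * np.diag(d ** 2)  # locality-regularized covariance
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                         # enforce the sum-to-one constraint
    # Hallucinated HR patch: the same weights on the HR counterparts.
    return w @ H

# Toy usage with random data (patch sizes are illustrative only).
rng = np.random.default_rng(0)
lr_dict = rng.standard_normal((50, 16))   # 50 LR training patches (4x4)
hr_dict = rng.standard_normal((50, 64))   # matching HR patches (8x8)
lr_patch = rng.standard_normal(16)        # an LR input patch
hr_patch = tlcr_reconstruct(lr_patch, lr_dict, hr_dict)
```

The paper's reproducing-learning step would then add the full hallucinated HR face back into the training set and repeat the reconstruction, iterating toward the final result.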