Neither Global Nor Local: Regularized Patch-Based Representation for Single Sample Per Person Face Recognition

This paper presents a regularized patch-based representation for single sample per person face recognition. We represent each image by a collection of patches and seek their sparse representations under the gallery images patches and intra-class variance dictionaries at the same time. For the reconstruction coefficients of all the patches from the same image, by imposing a group sparsity constraint on the reconstruction coefficients corresponding to the patches from the gallery images, and by imposing a sparsity constraint on the reconstruction coefficients corresponding to the intra-class variance dictionaries, our formulation harvests the advantages of both patch-based image representation and global image representation, i.e. our method overcomes the side effect of those patches which are severely corrupted by the variances in face recognition, while enforcing those less discriminative patches to be constructed by the gallery patches from the right person. Moreover, instead of using the manually designed intra-class variance dictionaries, we propose to learn the intra-class variance dictionaries which not only greatly accelerate the prediction of the probe images but also improve the face recognition accuracy in the single sample per person scenario. Experimental results on the AR, Extended Yale B, CMU-PIE, and LFW datasets show that our method outperforms sparse coding related face recognition methods as well as some other specially designed single sample per person face representation methods, and achieves the best performance. These encouraging results demonstrate the effectiveness of regularized patch-based face representation for single sample per person face recognition.
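The two-dictionary coding idea in the abstract can be illustrated with a small numerical sketch. Everything below is an illustrative assumption, not the authors' implementation: the ISTA-style solver, the penalty weights, and all function names (`rpr_sketch`, `group_shrink`, `soft_threshold`) are hypothetical; the paper's exact model and optimizer are in the full text.

```python
import numpy as np

def soft_threshold(X, t):
    # Elementwise soft-thresholding: prox of t * ||X||_1.
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def group_shrink(A, groups, t):
    # Block soft-thresholding: prox of t * sum_g ||A[g]||_F. Each gallery
    # subject's coefficient block shrinks toward zero as a unit, which is
    # what produces the group sparsity described in the abstract.
    out = A.copy()
    for g in groups:
        norm = np.linalg.norm(A[g])
        out[g] = A[g] * max(1.0 - t / norm, 0.0) if norm > 0 else 0.0
    return out

def rpr_sketch(P, D, V, groups, lam1=0.1, lam2=0.1, n_iter=200):
    """Proximal-gradient sketch of a two-dictionary patch coding:

        min_{A,B}  0.5 * ||P - D A - V B||_F^2
                   + lam1 * sum_g ||A[g]||_F + lam2 * ||B||_1

    P: d x m matrix of probe-image patches (one patch per column);
    D: gallery-patch dictionary; V: intra-class variance dictionary;
    groups: row-index arrays of A, one block per gallery subject.
    """
    A = np.zeros((D.shape[1], P.shape[1]))
    B = np.zeros((V.shape[1], P.shape[1]))
    # Step size 1/L, where L is the Lipschitz constant of the smooth
    # term's gradient: squared spectral norm of the stacked dictionary.
    L = np.linalg.norm(np.hstack([D, V]), 2) ** 2
    for _ in range(n_iter):
        R = D @ A + V @ B - P                       # residual
        A = group_shrink(A - (D.T @ R) / L, groups, lam1 / L)
        B = soft_threshold(B - (V.T @ R) / L, lam2 / L)
    return A, B
```

A classification step would follow: score each gallery subject by the reconstruction residual using only that subject's block of `A` (plus the variance term), and pick the smallest; that step is omitted here.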

Bibliographic Details

Published in: International journal of computer vision, 2015-02, Vol.111 (3), p.365-383
Main authors: Gao, Shenghua; Jia, Kui; Zhuang, Liansheng; Ma, Yi
Format: Article
Language: English
Online access: Full text
DOI: 10.1007/s11263-014-0750-4
ISSN: 0920-5691
EISSN: 1573-1405
Source: Springer Nature - Complete Springer Journals
Subjects:
Analysis
Artificial Intelligence
Biometry
Collaboration
Computer Imaging
Computer Science
Dictionaries
Face
Face recognition
Facial recognition technology
Galleries
Image Processing and Computer Vision
Image processing systems
Image reconstruction
Methods
Pattern Recognition
Pattern Recognition and Graphics
Principal components analysis
Reconstruction
Regularization methods
Representations
Sparsity
Studies
Variance
Vision