Robust RGB-D Face Recognition Using Attribute-Aware Loss

Existing convolutional neural network (CNN) based face recognition algorithms typically learn a discriminative feature mapping, using a loss function that enforces separation of features from different classes and/or aggregation of features within the same class. However, they may suffer from bias in the training data such as uneven sampling density, because they optimize the adjacency relationship of the learned features without considering the proximity of the underlying faces. Moreover, since they only use facial images for training, the learned feature mapping may not correctly indicate the relationship of other attributes such as gender and ethnicity, which can be important for some face recognition applications. In this paper, we propose a new CNN-based face recognition approach that incorporates such attributes into the training process. Using an attribute-aware loss function that regularizes the feature mapping using attribute proximity, our approach learns more discriminative features that are correlated with the attributes. We train our face recognition model on a large-scale RGB-D data set with over 100K identities captured under real application conditions. By comparing our approach with other methods on a variety of experiments, we demonstrate that depth channel and attribute-aware loss greatly improve the accuracy and robustness of face recognition.
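
The attribute-aware loss is only summarized in the abstract above; as a rough illustration of the general idea, the sketch below combines a standard softmax identity-classification term with a hypothetical attribute-proximity regularizer that pulls together embeddings of samples sharing attribute labels (e.g., gender, ethnicity). The class name AttributeAwareLoss, the attr_weight parameter, and the attribute-similarity measure are assumptions made for this sketch, not the formulation from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttributeAwareLoss(nn.Module):
    """Sketch: identity cross-entropy plus an attribute-proximity regularizer.

    Illustrative approximation only; not the authors' exact loss.
    """

    def __init__(self, feat_dim, num_identities, attr_weight=0.1):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_identities)  # identity logits
        self.attr_weight = attr_weight  # regularizer strength (assumed hyperparameter)

    def forward(self, features, id_labels, attr_labels):
        # features:    (B, feat_dim) embeddings from the CNN backbone
        # id_labels:   (B,) integer identity labels
        # attr_labels: (B, num_attrs) discrete attribute codes, e.g. [gender, ethnicity]
        cls_loss = F.cross_entropy(self.classifier(features), id_labels)

        # Pairwise attribute similarity: fraction of attributes two samples share.
        same_attr = (attr_labels.unsqueeze(0) == attr_labels.unsqueeze(1)).float().mean(dim=2)

        # Pairwise Euclidean distances between L2-normalized embeddings.
        feats = F.normalize(features, dim=1)
        dists = torch.cdist(feats, feats)

        # Pull together pairs that share attributes, ignoring self-pairs.
        off_diag = 1.0 - torch.eye(features.size(0), device=features.device)
        attr_reg = (same_attr * dists * off_diag).sum() / off_diag.sum().clamp(min=1.0)

        return cls_loss + self.attr_weight * attr_reg

In a training loop, such a loss would be applied to embeddings produced by an RGB-D backbone together with identity and attribute labels, with attr_weight trading off identity discrimination against attribute consistency.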

Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020-10, Vol. 42 (10), p. 2552-2566
Main Authors: Jiang, Luo; Zhang, Juyong; Deng, Bailin
Format: Article
Language: English
container_end_page 2566
container_issue 10
container_start_page 2552
container_title IEEE transactions on pattern analysis and machine intelligence
container_volume 42
creator Jiang, Luo
Zhang, Juyong
Deng, Bailin
description Existing convolutional neural network (CNN) based face recognition algorithms typically learn a discriminative feature mapping, using a loss function that enforces separation of features from different classes and/or aggregation of features within the same class. However, they may suffer from bias in the training data such as uneven sampling density, because they optimize the adjacency relationship of the learned features without considering the proximity of the underlying faces. Moreover, since they only use facial images for training, the learned feature mapping may not correctly indicate the relationship of other attributes such as gender and ethnicity, which can be important for some face recognition applications. In this paper, we propose a new CNN-based face recognition approach that incorporates such attributes into the training process. Using an attribute-aware loss function that regularizes the feature mapping using attribute proximity, our approach learns more discriminative features that are correlated with the attributes. We train our face recognition model on a large-scale RGB-D data set with over 100K identities captured under real application conditions. By comparing our approach with other methods on a variety of experiments, we demonstrate that depth channel and attribute-aware loss greatly improve the accuracy and robustness of face recognition.
doi_str_mv 10.1109/TPAMI.2019.2919284
format Article
fulltext fulltext_linktorsrc
identifier ISSN: 0162-8828
ispartof IEEE transactions on pattern analysis and machine intelligence, 2020-10, Vol.42 (10), p.2552-2566
issn 0162-8828
1939-3539
2160-9292
language eng
recordid cdi_pubmed_primary_31144624
source IEEE Electronic Library (IEL)
subjects Algorithms
Artificial neural networks
attribute-aware loss
Deep learning
Face
Face recognition
Facial recognition technology
Feature extraction
Mapping
RGB-D images
Task analysis
Training
Training data
uneven sampling density
title Robust RGB-D Face Recognition Using Attribute-Aware Loss
url https://doi.org/10.1109/TPAMI.2019.2919284