Active Discriminative Cross-Domain Alignment for Low-Resolution Face Recognition
In real application scenarios, the face images captured by cameras often suffer from blur, illumination variation, occlusion, and low resolution (LR), which poses a challenging problem for many real-time face recognition systems due to the large distribution difference between the captured degraded images...
Saved in:
Published in: | IEEE Access 2020, Vol.8, p.97503-97515 |
---|---|
Main authors: | Zheng, Dongdong; Zhang, Kaibing; Lu, Jian; Jing, Junfeng; Xiong, Zenggang |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | 97515 |
---|---|
container_issue | |
container_start_page | 97503 |
container_title | IEEE access |
container_volume | 8 |
creator | Zheng, Dongdong; Zhang, Kaibing; Lu, Jian; Jing, Junfeng; Xiong, Zenggang |
description | In real application scenarios, the face images captured by cameras often suffer from blur, illumination variation, occlusion, and low resolution (LR), which poses a challenging problem for many real-time face recognition systems due to the large distribution difference between the captured degraded images and the high-resolution (HR) gallery images. With the widespread application of transfer learning in cross-domain visual recognition, we propose a novel active discriminative cross-domain alignment (ADCDA) technique for LR face recognition that jointly explores both the geometrical and the statistical properties of the source domain and the target domain in a unique way. Specifically, the proposed ADCDA-based method contains three key components: 1) it simultaneously reduces the domain shift in both the marginal distribution and the conditional distribution between the source domain and the target domain; 2) it aligns the data of the two domains in a common latent subspace by discriminant locality alignment (DLA); 3) it selects representative and diverse samples with an active learning strategy to further improve classification performance. Extensive experiments on six benchmark databases verify that the proposed method significantly outperforms other state-of-the-art methods. |
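The abstract's component 1) reduces the shift in both the marginal and the conditional distributions between domains. The record does not give the paper's formulation, but a minimal NumPy sketch of the quantity such methods drive down, in the style of joint distribution adaptation with a linear-kernel MMD (function names here are illustrative, not from the paper; target pseudo-labels stand in for unknown labels):

```python
import numpy as np

def mmd_linear(Xs, Xt):
    """Empirical MMD with a linear kernel: squared Euclidean
    distance between the two sample means."""
    return float(np.sum((Xs.mean(axis=0) - Xt.mean(axis=0)) ** 2))

def joint_mmd(Xs, ys, Xt, yt_pseudo):
    """Marginal MMD plus per-class conditional MMD, where target
    classes come from pseudo-labels (as in JDA-style adaptation)."""
    loss = mmd_linear(Xs, Xt)  # marginal distribution shift
    for c in np.unique(ys):
        Xs_c = Xs[ys == c]
        Xt_c = Xt[yt_pseudo == c]
        if len(Xs_c) and len(Xt_c):
            loss += mmd_linear(Xs_c, Xt_c)  # conditional shift for class c
    return loss
```

A learned projection would then be chosen to minimize this joint discrepancy; the sketch only shows the discrepancy itself, under the stated linear-kernel assumption.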
doi_str_mv | 10.1109/ACCESS.2020.2996796 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 2169-3536 |
ispartof | IEEE access, 2020, Vol.8, p.97503-97515 |
issn | 2169-3536 2169-3536 |
language | eng |
recordid | cdi_proquest_journals_2454399816 |
source | IEEE Open Access Journals; DOAJ Directory of Open Access Journals; EZB-FREE-00999 freely available EZB journals |
subjects | active learning; Alignment; Dimensionality reduction; discriminant locality alignment (DLA); domain adaptation; Domains; Face; Face recognition; Image reconstruction; Image resolution; Kernel; Learning; low-resolution (LR) face recognition; Object recognition; Occlusion; Statistical methods; Target recognition; Task analysis; Transfer learning; Visual discrimination |
title | Active Discriminative Cross-Domain Alignment for Low-Resolution Face Recognition |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-26T06%3A12%3A03IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_doaj_&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Active%20Discriminative%20Cross-Domain%20Alignment%20for%20Low-Resolution%20Face%20Recognition&rft.jtitle=IEEE%20access&rft.au=Zheng,%20Dongdong&rft.date=2020&rft.volume=8&rft.spage=97503&rft.epage=97515&rft.pages=97503-97515&rft.issn=2169-3536&rft.eissn=2169-3536&rft.coden=IAECCG&rft_id=info:doi/10.1109/ACCESS.2020.2996796&rft_dat=%3Cproquest_doaj_%3E2454399816%3C/proquest_doaj_%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2454399816&rft_id=info:pmid/&rft_ieee_id=9098902&rft_doaj_id=oai_doaj_org_article_7b58a7ad1a5a4721a4040bfd551b4b25&rfr_iscdi=true |
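The abstract's component 3) selects "representative and diverse" samples via active learning. The record does not state the paper's exact criterion; a hedged greedy sketch of one common realization (closest-to-mean seeding for representativeness, then farthest-point selection for diversity; the function is a hypothetical stand-in):

```python
import numpy as np

def select_diverse_representative(X, k):
    """Greedy active selection sketch: seed with the point nearest the
    data mean (representative), then repeatedly add the point farthest
    from everything already chosen (diverse). Returns k row indices."""
    center = X.mean(axis=0)
    chosen = [int(np.argmin(np.linalg.norm(X - center, axis=1)))]
    while len(chosen) < k:
        # distance of every point to its nearest already-chosen point
        d = np.min(
            np.linalg.norm(X[:, None, :] - X[chosen][None, :, :], axis=2),
            axis=1,
        )
        d[chosen] = -1.0  # never re-pick a selected point
        chosen.append(int(np.argmax(d)))
    return chosen
```

The selected samples would then be labeled and added to training, which is the usual active learning loop; the scoring rule here is an assumption for illustration only.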