SSPP-DAN: Deep Domain Adaptation Network for Face Recognition with Single Sample Per Person

Real-world face recognition using a single sample per person (SSPP) is a challenging task. The problem is exacerbated if the conditions under which the gallery image and the probe set are captured are completely different. To address these issues from the perspective of domain adaptation, we introduce an SSPP domain adaptation network (SSPP-DAN).

Detailed description

Saved in:
Bibliographic details
Main authors: Hong, Sungeun; Im, Woobin; Ryu, Jongbin; Yang, Hyun S
Format: Article
Language: eng
Subjects:
Online access: Order full text
creator Hong, Sungeun; Im, Woobin; Ryu, Jongbin; Yang, Hyun S
description Real-world face recognition using a single sample per person (SSPP) is a challenging task. The problem is exacerbated if the conditions under which the gallery image and the probe set are captured are completely different. To address these issues from the perspective of domain adaptation, we introduce an SSPP domain adaptation network (SSPP-DAN). In the proposed approach, domain adaptation, feature extraction, and classification are performed jointly using a deep architecture with domain-adversarial training. However, the SSPP characteristic of one training sample per class is insufficient to train the deep architecture. To overcome this shortage, we generate synthetic images with varying poses using a 3D face model. Experimental evaluations using a realistic SSPP dataset show that deep domain adaptation and image synthesis complement each other and dramatically improve accuracy. Experiments on a benchmark dataset using the proposed approach show state-of-the-art performance. All the dataset and the source code can be found in our online repository (https://github.com/csehong/SSPP-DAN).
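The description states that domain adaptation, feature extraction, and classification are trained jointly via domain-adversarial training. In this line of work the usual mechanism is a gradient reversal layer: it acts as the identity in the forward pass, but flips (and scales) the gradient in the backward pass, so the shared feature extractor learns domain-invariant features while the domain classifier tries to tell source from target. A minimal pure-Python sketch follows; the class name and the `lam` trade-off parameter are illustrative assumptions for this sketch, not the paper's actual code.

```python
class GradientReversal:
    """Sketch of a gradient reversal layer used in domain-adversarial training.

    Forward pass: identity (the domain classifier sees the features unchanged).
    Backward pass: gradient is multiplied by -lam, so the feature extractor is
    updated to *maximize* the domain classifier's loss, encouraging features
    that do not distinguish the source domain from the target domain.
    """

    def __init__(self, lam=1.0):
        self.lam = lam  # weight of the adversarial (domain-confusion) signal

    def forward(self, features):
        # Identity: features flow through unchanged.
        return list(features)

    def backward(self, grad_output):
        # Sign-flipped, scaled gradient flows back to the feature extractor.
        return [-self.lam * g for g in grad_output]


grl = GradientReversal(lam=0.5)
feats = [1.0, -2.0, 3.0]
grads = [0.1, 0.2, 0.3]
print(grl.forward(feats))   # unchanged features
print(grl.backward(grads))  # reversed, scaled gradients
```

In frameworks with automatic differentiation this is typically implemented as a custom autograd operation; the sketch above only makes the forward/backward contract explicit.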
doi_str_mv 10.48550/arxiv.1702.04069
format Article
fullrecord (cleaned) arXiv record cdi_arxiv_primary_1702_04069; article; created 2017-02-13; subject: Computer Science - Computer Vision and Pattern Recognition; rights: http://creativecommons.org/licenses/by/4.0 (free to read); source: arXiv.org (Open Access Repository)
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.1702.04069
language eng
recordid cdi_arxiv_primary_1702_04069
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title SSPP-DAN: Deep Domain Adaptation Network for Face Recognition with Single Sample Per Person
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-28T03%3A48%3A45IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=SSPP-DAN:%20Deep%20Domain%20Adaptation%20Network%20for%20Face%20Recognition%20with%20Single%20Sample%20Per%20Person&rft.au=Hong,%20Sungeun&rft.date=2017-02-13&rft_id=info:doi/10.48550/arxiv.1702.04069&rft_dat=%3Carxiv_GOX%3E1702_04069%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true