SSPP-DAN: Deep Domain Adaptation Network for Face Recognition with Single Sample Per Person
| Main authors: | , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
| Abstract: | Real-world face recognition using a single sample per person (SSPP) is a challenging task. The problem is exacerbated if the conditions under which the gallery image and the probe set are captured are completely different. To address these issues from the perspective of domain adaptation, we introduce an SSPP domain adaptation network (SSPP-DAN). In the proposed approach, domain adaptation, feature extraction, and classification are performed jointly using a deep architecture with domain-adversarial training. However, the SSPP characteristic of one training sample per class is insufficient to train the deep architecture. To overcome this shortage, we generate synthetic images with varying poses using a 3D face model. Experimental evaluations on a realistic SSPP dataset show that deep domain adaptation and image synthesis complement each other and dramatically improve accuracy. Experiments on a benchmark dataset with the proposed approach show state-of-the-art performance. The dataset and source code can be found in our online repository (https://github.com/csehong/SSPP-DAN). |
| DOI: | 10.48550/arxiv.1702.04069 |
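
The abstract above hinges on domain-adversarial training: a shared feature extractor is optimized so that an identity classifier succeeds while a domain discriminator (gallery/synthetic vs. probe domain) fails, which is commonly implemented with a gradient reversal layer. The sketch below illustrates only that general training setup; the PyTorch framing, backbone, layer sizes, and batch handling are illustrative assumptions and are not taken from the paper or from the linked repository.

```python
# Minimal sketch of domain-adversarial training with a gradient reversal
# layer (GRL), in the spirit of SSPP-DAN. All shapes and layers are
# illustrative assumptions, not the authors' exact architecture.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DomainAdversarialSketch(nn.Module):
    def __init__(self, feat_dim=256, num_identities=30, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        # Shared feature extractor (a real system would use a CNN backbone).
        self.features = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 64 * 64, feat_dim), nn.ReLU())
        # Identity classifier, trained on labeled source images only.
        self.identity_head = nn.Linear(feat_dim, num_identities)
        # Domain discriminator: source (gallery/synthetic) vs. target (probe).
        self.domain_head = nn.Linear(feat_dim, 2)

    def forward(self, x):
        f = self.features(x)
        id_logits = self.identity_head(f)
        dom_logits = self.domain_head(GradReverse.apply(f, self.lambd))
        return id_logits, dom_logits

# Toy usage: one labeled source batch plus one unlabeled target batch.
model = DomainAdversarialSketch()
ce = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

src = torch.randn(8, 3, 64, 64)        # gallery images + synthesized poses
src_ids = torch.randint(0, 30, (8,))   # identity labels for the source batch
tgt = torch.randn(8, 3, 64, 64)        # unlabeled probe-domain images

id_logits, src_dom = model(src)
_, tgt_dom = model(tgt)
domain_labels = torch.cat([torch.zeros(8), torch.ones(8)]).long()
loss = ce(id_logits, src_ids) + ce(torch.cat([src_dom, tgt_dom]), domain_labels)
opt.zero_grad(); loss.backward(); opt.step()
```

In such a scheme, the pose-varied images synthesized from the 3D face model would populate the labeled source batch, while the unlabeled probe-domain images contribute only to the domain loss.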