Two-stage dual-resolution face network for cross-resolution face recognition in surveillance systems

Bibliographic Details
Published in: The Visual Computer, 2024-08, Vol. 40 (8), pp. 5545-5556
Authors: Chen, Liangqin; Chen, Jiwang; Xu, Zhimeng; Liao, Yipeng; Chen, Zhizhang
Format: Article
Language: English
Online access: Full text
Description
Abstract: Face recognition for surveillance remains a complex challenge due to the disparity between low-resolution (LR) face images captured by surveillance cameras and the typically high-resolution (HR) face images stored in databases. To address this cross-resolution face recognition problem, we propose a two-stage dual-resolution face network that learns more robust resolution-invariant representations. In the first stage, we pre-train the proposed dual-resolution face network using HR images only. The network uses a two-branch structure and introduces bilateral connections to fuse the high- and low-resolution features extracted by the two branches. In the second stage, we introduce the triplet loss as the fine-tuning loss function and design a training strategy that combines the triplet loss with competence-based curriculum learning. Guided by the competence function, the pre-trained model is trained first on easy sample sets and gradually progresses to more challenging ones. Our method achieves a face verification accuracy of 99.25% on the native cross-quality dataset SCFace and 99.71% on the high-quality dataset LFW. Moreover, our method also improves face verification accuracy on the native low-quality dataset.
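
The abstract describes two technical components: a two-branch, dual-resolution backbone whose branches are fused through bilateral connections, and a fine-tuning stage that combines the triplet loss with competence-based curriculum learning. The sketches below show one way each component could look in PyTorch. They are not the authors' code: the layer sizes, the 0.25x downsampling factor, the project-resize-add fusion, the triplet margin, and the square-root competence schedule are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(c_in, c_out, stride=1):
    """3x3 conv -> BN -> ReLU."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, stride, 1, bias=False),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )


class DualResolutionFaceNet(nn.Module):
    """Two branches process an HR face and its downsampled LR copy;
    bilateral connections exchange features between the branches."""

    def __init__(self, embed_dim=512):
        super().__init__()
        self.hr_stage1 = conv_block(3, 64, stride=2)
        self.lr_stage1 = conv_block(3, 64, stride=1)
        # bilateral 1x1 connections (this fusion scheme is an assumption)
        self.lr_to_hr = nn.Conv2d(64, 64, 1)
        self.hr_to_lr = nn.Conv2d(64, 64, 1)
        self.hr_stage2 = conv_block(64, 128, stride=2)
        self.lr_stage2 = conv_block(64, 128, stride=2)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(256, embed_dim)

    def forward(self, x_hr):
        # LR branch input: a downsampled copy of the HR face
        x_lr = F.interpolate(x_hr, scale_factor=0.25, mode="bilinear",
                             align_corners=False)
        h_feat = self.hr_stage1(x_hr)
        l_feat = self.lr_stage1(x_lr)
        # bilateral connections: project, resize, and add the other branch
        h_fused = h_feat + F.interpolate(self.lr_to_hr(l_feat),
                                         size=h_feat.shape[-2:],
                                         mode="bilinear", align_corners=False)
        l_fused = l_feat + F.interpolate(self.hr_to_lr(h_feat),
                                         size=l_feat.shape[-2:],
                                         mode="bilinear", align_corners=False)
        h_out = self.pool(self.hr_stage2(h_fused)).flatten(1)
        l_out = self.pool(self.lr_stage2(l_fused)).flatten(1)
        # fused, L2-normalized embedding intended to be resolution-invariant
        return F.normalize(self.fc(torch.cat([h_out, l_out], dim=1)), dim=1)
```

The second sketch gates which triplets are eligible for sampling by a competence function that grows with the training step, so fine-tuning starts on the easiest samples and gradually admits harder ones. The square-root ramp from an initial competence c0 follows the common competence-based curriculum formulation and is assumed here, not taken from the paper.

```python
import math

import torch
import torch.nn as nn


def competence(t, total_steps, c0=0.1):
    """Fraction of the difficulty-sorted training set available at step t
    (square-root schedule; the exact function in the paper may differ)."""
    return min(1.0, math.sqrt(t * (1.0 - c0 ** 2) / total_steps + c0 ** 2))


def fine_tune(model, sorted_triplets, optimizer, total_steps, device="cpu"):
    """sorted_triplets: list of (anchor, positive, negative) image batches,
    ordered from easiest to hardest by some difficulty measure."""
    triplet_loss = nn.TripletMarginLoss(margin=0.3)  # margin is illustrative
    model.train()
    for t in range(1, total_steps + 1):
        # only the easiest c(t) fraction of triplets is eligible at this step
        n_avail = max(1, int(competence(t, total_steps) * len(sorted_triplets)))
        a, p, n = sorted_triplets[torch.randint(0, n_avail, (1,)).item()]
        loss = triplet_loss(model(a.to(device)), model(p.to(device)),
                            model(n.to(device)))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

In this sketch the curriculum only controls which portion of the difficulty-sorted triplet list may be drawn from at each step; everything else is a standard triplet fine-tuning loop.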
ISSN: 0178-2789 (print); 1432-2315 (electronic)
DOI: 10.1007/s00371-023-03121-4