Face Anti-Spoofing Using Transformers With Relation-Aware Mechanism

Bibliographic Details
Published in: IEEE Transactions on Biometrics, Behavior, and Identity Science, 2022-07, Vol. 4 (3), p. 439-450
Main Authors: Wang, Zhuo; Wang, Qiangchang; Deng, Weihong; Guo, Guodong
Format: Article
Language: English
Description
Abstract: Face anti-spoofing (FAS) is important to secure face recognition systems. Deep learning has achieved great success in this area; however, most existing approaches fail to consider comprehensive relation-aware local representations of live and spoof faces. To address this issue, we propose a Transformer-based Face Anti-Spoofing (TransFAS) model to explore comprehensive facial parts for FAS. Besides the multi-head self-attention, which explores relations among local patches in the same layer, we propose cross-layer relation-aware attention (CRA) to adaptively integrate local patches from different layers. Furthermore, to effectively fuse hierarchical features, we explore the best hierarchical feature fusion (HFF) structure, which can capture the complementary information between low-level artifacts and high-level semantic features for the spoofing patterns. With these novel modules, TransFAS not only improves the generalization capability of the classical vision transformer, but also achieves state-of-the-art (SOTA) performance on multiple benchmarks, demonstrating the superiority of the transformer-based model for FAS.
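The cross-layer relation-aware attention described above can be pictured as patch tokens of a later transformer layer attending to patch tokens of an earlier layer, so that low-level artifact cues are adaptively integrated into higher-level representations. The following PyTorch snippet is a minimal sketch of that idea only; it is not the authors' TransFAS implementation, and all names (CrossLayerAttention, dim, num_heads, the choice of layers 4 and 8) are illustrative assumptions.

    # Hedged sketch of a cross-layer attention block: queries come from the
    # current layer, keys/values from an earlier layer, followed by a
    # residual integration of the fused tokens.
    import torch
    import torch.nn as nn

    class CrossLayerAttention(nn.Module):
        def __init__(self, dim: int, num_heads: int = 8):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.norm_q = nn.LayerNorm(dim)
            self.norm_kv = nn.LayerNorm(dim)

        def forward(self, current_tokens: torch.Tensor,
                    earlier_tokens: torch.Tensor) -> torch.Tensor:
            # Both inputs: (batch, num_patches, dim)
            q = self.norm_q(current_tokens)
            kv = self.norm_kv(earlier_tokens)
            fused, _ = self.attn(q, kv, kv)    # relations across layers
            return current_tokens + fused      # residual integration

    # Toy usage: fuse (hypothetical) layer-4 tokens into layer-8 tokens.
    if __name__ == "__main__":
        cra = CrossLayerAttention(dim=768, num_heads=8)
        layer8 = torch.randn(2, 196, 768)
        layer4 = torch.randn(2, 196, 768)
        print(cra(layer8, layer4).shape)  # torch.Size([2, 196, 768])

A hierarchical feature fusion stage, as mentioned in the abstract, would then combine such fused tokens from several depths; the exact structure used in the paper is not reproduced here.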
ISSN: 2637-6407
DOI: 10.1109/TBIOM.2022.3184500