Landmark Free Face Attribute Prediction


Bibliographic Details
Published in: IEEE Transactions on Image Processing, 2018-09, Vol. 27 (9), pp. 4651-4662
Main authors: Jianshu Li, Fang Zhao, Jiashi Feng, Sujoy Roy, Shuicheng Yan, Terence Sim
Format: Article
Language: English
Description
Abstract: Face attribute prediction in the wild is important for many facial analysis applications, yet it is very challenging due to ubiquitous face variations. In this paper, we address face attribute prediction in the wild by proposing a novel method, lAndmark Free Face AttrIbute pRediction (AFFAIR). Unlike traditional face attribute prediction methods that require facial landmark detection and face alignment, AFFAIR uses an end-to-end learning pipeline to jointly learn a hierarchy of spatial transformations that optimize facial attribute prediction, with no reliance on landmark annotations or pre-trained landmark detectors. AFFAIR achieves this by simultaneously: 1) learning a global transformation, tailored to each face, that effectively alleviates the negative effect of global face variation on the subsequent attribute prediction; 2) locating the most relevant facial part for attribute prediction; and 3) aggregating the global and local features for robust attribute prediction. Within AFFAIR, a new competitive learning strategy is developed that effectively enhances global transformation learning for better attribute prediction. We show that AFFAIR, which simultaneously learns the face-level transformation and attribute-level localization within a unified framework, achieves state-of-the-art performance on three face attribute prediction benchmarks with zero information about landmarks.
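The three-step pipeline in the abstract (global transformation, part localization, global/local feature aggregation) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: in AFFAIR the affine parameters, part location, and features are all produced by learned CNN branches, whereas here `theta_global` and `part_box` are supplied by hand and the "features" are simple mean intensities. All function names are illustrative.

```python
import numpy as np

def affine_warp(image, theta):
    """Warp a 2-D image with a 2x3 affine matrix theta (output-to-input
    mapping over normalized [-1, 1] coordinates), nearest-neighbor sampling.
    This mirrors the role of a spatial-transformer-style global alignment."""
    H, W = image.shape
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W),
                         indexing="ij")
    coords = np.stack([xs, ys, np.ones_like(xs)], axis=-1)   # (H, W, 3)
    src = coords @ theta.T                                   # (H, W, 2)
    sx = np.clip(((src[..., 0] + 1) / 2 * (W - 1)).round().astype(int), 0, W - 1)
    sy = np.clip(((src[..., 1] + 1) / 2 * (H - 1)).round().astype(int), 0, H - 1)
    return image[sy, sx]

def predict_attribute_features(image, theta_global, part_box):
    # 1) global transformation: align the whole face
    aligned = affine_warp(image, theta_global)
    # 2) attribute-level localization: crop the most relevant facial part
    y0, y1, x0, x1 = part_box
    part = aligned[y0:y1, x0:x1]
    # 3) aggregate global and local features (stand-in: mean intensities;
    #    AFFAIR concatenates learned global and local CNN features)
    return np.array([aligned.mean(), part.mean()])

img = np.arange(16, dtype=float).reshape(4, 4)
identity = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
feats = predict_attribute_features(img, identity, (1, 3, 1, 3))
```

With the identity transform the warp returns the input unchanged, so the sketch isolates the data flow: the attribute classifier would consume the concatenated global and local feature vector.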
ISSN: 1057-7149, 1941-0042
DOI: 10.1109/TIP.2018.2839521