A deep learning approach for face recognition based on angularly discriminative features


Detailed description

Bibliographic details
Published in: Pattern Recognition Letters 2019-12, Vol. 128, pp. 414-419
Main authors: Iqbal, Mansoor, Sameem, M. Shujah Islam, Naqvi, Nuzhat, Kanwal, Shamsa, Ye, Zhongfu
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Summary:
• A novel technique to improve the performance of deep face recognition systems.
• Improves softmax loss by combining a multiplicative angular margin with an additive cosine margin.
• Applies both margins jointly rather than splitting the loss into a piece-wise function.
• Test datasets contain large variations in age, pose, and facial expression.
• Results are achieved on widely used benchmark datasets with low computational power.

Face recognition in digital images or video frames has many real-world applications in modern computer vision. The loss function plays a vital role in deep face recognition, and several loss functions have recently been proposed to reduce model error in classification. Among these, softmax-based losses implement either a multiplicative angular margin or an additive cosine margin; individually, these margins have limited capacity to reduce model error. To fill this gap, we propose hybrid angularly discriminative features that combine the multiplicative angular margin and the additive cosine margin, improving on both angular softmax loss and large-margin cosine loss. The proposed model was trained on the CASIA-WebFace dataset and tested on Labeled Faces in the Wild (LFW), YouTube Faces (YTF), VGGFace1, and VGGFace2. Experiments show 99.77% accuracy on LFW and 96.40% accuracy on YTF, higher than existing comparable state-of-the-art techniques.
ISSN:0167-8655
1872-7344
DOI:10.1016/j.patrec.2019.10.002