EVALUATING THE PERFORMANCE OF DEEP SUPERVISED AUTO ENCODER IN SINGLE SAMPLE FACE RECOGNITION PROBLEM USING KULLBACK-LEIBLER DIVERGENCE SPARSITY REGULARIZER

Bibliographic Details
Published in: Journal of Theoretical and Applied Information Technology, 2016-05, Vol. 87 (2), p. 255
Authors: Viktorisa, Otniel Y.; Wasito, Ito; Syafiandini, Arida F.
Format: Article
Language: English
Online access: Full text
Description
Abstract: Recent developments in supervised autoencoder research offer promising solutions to the single sample face recognition problem. In this research, a Kullback-Leibler Divergence (KLD) approach is proposed to obtain the penalty of the sparsity constraint for the deep autoencoder learning process. The approach is tested on two datasets, Extended Yale B (cropped version) and LFWcrop. For comparison, Log and εL1 regularizers are also employed as sparsity regularizers. Experimental results confirm that KLD performs better than Log and εL1 in image classification on both datasets.
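For context, a common formulation of such a KL-divergence sparsity penalty on an autoencoder's hidden layer is sketched below. This is a standard textbook form, not necessarily the paper's exact notation: ρ denotes an assumed target activation level, ρ̂_j the mean activation of hidden unit j over m training samples, s the number of hidden units, and β a weighting coefficient.

\[
\Omega_{\mathrm{KLD}}
= \sum_{j=1}^{s} \mathrm{KL}\!\left(\rho \,\middle\|\, \hat{\rho}_j\right)
= \sum_{j=1}^{s}\left[\rho \log\frac{\rho}{\hat{\rho}_j}
+ (1-\rho)\log\frac{1-\rho}{1-\hat{\rho}_j}\right],
\qquad
\hat{\rho}_j = \frac{1}{m}\sum_{i=1}^{m} a_j\!\left(x^{(i)}\right).
\]

The penalty is typically added to the reconstruction loss as J = J_rec + β·Ω_KLD, which drives most hidden activations toward the small target ρ and thus enforces sparse codes during deep autoencoder training.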
ISSN: 1817-3195