Extended linear regression for undersampled face recognition

Bibliographic Details
Published in: Journal of Visual Communication and Image Representation, 2014-10, Vol. 25 (7), p. 1800-1809
Authors: Chen, Si-Bao; Ding, Chris H.Q.; Luo, Bin
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract:
• Linear Regression Classification (LRC) is extended to the undersampled situation.
• An intraclass variant dictionary is adopted to represent training/testing variation.
• Quasi-inverse, ridge regularization, and SVD are designed to solve the low-rank problem.
• Experiments show that ELRC has better generalization ability and is more robust.

Linear Regression Classification (LRC) is a recently proposed pattern recognition method that formulates recognition as class-specific linear regression, assuming sufficient training samples per class. In this paper, LRC is extended via an intraclass variant dictionary and SVD to undersampled face recognition, where each class has very few training samples, or even only one. The intraclass variant dictionary is adopted in the undersampled setting to represent the possible variation between training and testing samples. Three techniques, quasi-inverse, ridge regularization, and Singular Value Decomposition (SVD), are designed to solve the low-rank problem of the data matrix. The resulting algorithm, named Extended LRC (ELRC), is then presented for face recognition via the intraclass variant dictionary and SVD. Experimental results on three well-known face databases show that the proposed ELRC generalizes better and is more robust than many state-of-the-art methods in the undersampled setting.
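The abstract describes the decision rule only at a high level. The following Python/NumPy sketch illustrates one way an LRC-style classifier could be extended with an intraclass variant dictionary, using ridge regularization (one of the three options named above) to keep the class-specific normal equations well-posed when each class supplies only one or a few samples. The function name elrc_predict, the parameter lam, and the construction of the dictionary D from same-class differences in an auxiliary set are assumptions of this sketch, not details confirmed by the paper.

```python
import numpy as np

def elrc_predict(y, class_samples, D, lam=1e-3):
    """LRC-style nearest-subspace rule extended with an intraclass
    variant dictionary (illustrative sketch, not the authors' code).

    y             : (d,) test face vector.
    class_samples : dict {label: (d, n_c) array} of gallery samples per
                    class; n_c may be as small as 1 (undersampled case).
    D             : (d, m) intraclass variant dictionary, e.g. differences
                    of same-class samples from an auxiliary generic set
                    (an assumption of this sketch).
    lam           : ridge parameter; the regularization keeps A^T A + lam*I
                    invertible even when [X_c, D] is rank deficient.
    """
    best_label, best_residual = None, np.inf
    for label, X_c in class_samples.items():
        A = np.hstack([X_c, D])                   # class-specific design matrix
        G = A.T @ A + lam * np.eye(A.shape[1])    # ridge-regularized Gram matrix
        beta = np.linalg.solve(G, A.T @ y)        # regression coefficients
        residual = np.linalg.norm(y - A @ beta)   # reconstruction error for this class
        if residual < best_residual:
            best_label, best_residual = label, residual
    return best_label
```

Replacing the ridge solve with an SVD-based pseudo-inverse of A (e.g. np.linalg.pinv) would correspond to the SVD variant mentioned in the abstract; the quasi-inverse option is not specified in enough detail here to sketch.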
ISSN: 1047-3203 (print), 1095-9076 (electronic)
DOI: 10.1016/j.jvcir.2014.07.007