Shrink boost for selecting multi-LBP histogram features in object detection

Bibliographic Details
Main Authors: Cher Keng Heng, Yokomitsu, S., Matsumoto, Y., Tamura, H.
Format: Conference Proceedings
Language: English

Description
Abstract: Feature selection from sparse, high-dimensional features using conventional greedy boosting yields classifiers with poor generalization. We propose a novel "shrink boost" method to address this problem. It solves a sparse regularization problem with two iterative steps. First, a "boosting" step uses weighted training samples to learn a full high-dimensional classifier over all features; this avoids overfitting to a few features and improves generalization. Next, a "shrinkage" step shrinks the least discriminative classifier dimensions to zero, removing redundant features. In our object detection system, we use "shrink boost" to select sparse features from histograms of local binary patterns (LBP) over multiple quantizations and image channels, and learn a classifier of additive lookup tables (LUTs). Our evaluation shows that this classifier generalizes much better than those obtained from greedy boosting and from SVM methods, even with a limited number of training samples. On public human detection and pedestrian detection datasets, we achieve better performance than the state of the art. On our more challenging bird detection dataset, we show promising results.
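The abstract only outlines the method, so the Python sketch below is a minimal illustration of the two-step loop it describes, not the authors' implementation: a weighted per-dimension linear update stands in for the paper's additive lookup-table learner, and all names (`shrink_boost`, `keep_ratio`, the magnitude-based pruning rule) are assumptions chosen for the example.

```python
import numpy as np

def shrink_boost(X, y, n_rounds=50, keep_ratio=0.9):
    """Toy sketch of the two-step "shrink boost" iteration from the abstract.

    X : (n_samples, n_features) array of sparse, high-dimensional features
        (e.g. concatenated multi-LBP histograms).
    y : (n_samples,) labels in {-1, +1}.
    Returns one additive weight per feature; zeroed entries are pruned features.
    """
    n, d = X.shape
    sample_w = np.full(n, 1.0 / n)      # boosting weights over training samples
    active = np.ones(d, dtype=bool)     # features not yet shrunk to zero
    coef = np.zeros(d)                  # additive classifier (stand-in for the LUTs)

    for _ in range(n_rounds):
        # "Boosting" step: update ALL active dimensions at once using the
        # weighted samples, instead of greedily picking a single feature.
        Xa = X[:, active]
        num = Xa.T @ (sample_w * y)
        den = (Xa ** 2).T @ sample_w
        coef[active] += num / np.maximum(den, 1e-12)

        # Reweight samples (AdaBoost-style exponential loss, clipped for stability).
        margin = y * (X @ coef)
        sample_w = np.exp(-np.clip(margin, -30.0, 30.0))
        sample_w /= sample_w.sum()

        # "Shrinkage" step: set the least discriminative dimensions to zero,
        # keeping only a fraction of the currently active features.
        score = np.where(active, np.abs(coef), -np.inf)
        n_keep = max(1, int(keep_ratio * active.sum()))
        keep = np.argsort(score)[-n_keep:]
        active = np.zeros(d, dtype=bool)
        active[keep] = True
        coef[~active] = 0.0             # redundant features are removed

    return coef
```

The contrast with greedy boosting is visible in the loop: every active feature is updated in each round, and sparsity comes only from the explicit shrinkage step, matching the abstract's claim that fitting the full classifier first avoids overfitting to a few features.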
ISSN: 1063-6919
DOI: 10.1109/CVPR.2012.6248061