Group-of-features relevance in multinomial kernel logistic regression and application to human interaction recognition



Bibliographic Details
Published in: Expert Systems with Applications, June 2020, Vol. 148, Article 113247
Main authors: Ouyed, Ouiza; Allili, Mohand Said
Format: Article
Language: English
Description
Summary:
• Human interaction recognition (HIR).
• Sparse models for classification.
• Group-of-features relevance in multinomial kernel logistic regression (GFR-MKLR).
• Interaction description through joint geometric and spectral features.

We propose an approach for human interaction recognition (HIR) in videos using multinomial kernel logistic regression with group-of-features relevance (GFR-MKLR). Our approach couples kernel and group-sparsity modelling to ensure highly precise interaction classification. The group structure in GFR-MKLR is chosen to reflect a representation of interactions at the level of gestures, which provides more robustness to intra-class variability caused by occlusions and changes in subject appearance, body size and viewpoint. The groups consist of motion features extracted by tracking the interacting persons' joints over time. We encode group sparsity in GFR-MKLR through relevance weights that reflect the capability of each group (gesture) to discriminate between interaction categories. These weights are automatically estimated during GFR-MKLR training by gradient-descent minimisation. Our model is computationally efficient and can be trained on a small training dataset while maintaining good generalization and interpretation capabilities. Experiments on the well-known UT-Interaction dataset demonstrate the performance of our approach in comparison with state-of-the-art methods.
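The abstract's description of relevance-weighted group sparsity admits a compact dual formulation. The sketch below is a minimal, illustrative reading of that idea, not the authors' code: each feature group (e.g. the motion features of one tracked joint or gesture) gets its own base kernel, the combined kernel is a non-negative weighted sum of these base kernels, and both the dual coefficients and the group relevance weights are updated by gradient descent with an L1 term encouraging group sparsity. The RBF kernel choice, the penalty strength and all hyper-parameters are assumptions made for illustration.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Squared Euclidean distances between rows of A (n x d) and B (m x d).
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def fit_gfr_mklr(X_groups, y, n_classes, lam=1e-2, lr=0.1, epochs=300):
    """X_groups: list of G arrays of shape (n, d_g), one per feature group
    (e.g. the motion features of one tracked joint / gesture).
    y: integer class labels of shape (n,).
    Returns dual coefficients alpha (n x C) and group relevance weights w (G,)."""
    n = X_groups[0].shape[0]
    Ks = [rbf_kernel(Xg, Xg) for Xg in X_groups]   # one base kernel per group
    w = np.ones(len(Ks)) / len(Ks)                 # relevance weights, start uniform
    alpha = np.zeros((n, n_classes))               # dual coefficients
    Y = np.eye(n_classes)[y]                       # one-hot labels
    for _ in range(epochs):
        K = sum(wg * Kg for wg, Kg in zip(w, Ks))  # relevance-weighted kernel
        P = softmax(K @ alpha)                     # class posteriors
        R = (P - Y) / n                            # scaled residuals
        grad_alpha = K @ R
        # Gradient w.r.t. each group weight, plus an L1 term (lam)
        # that pushes weights of uninformative groups towards zero.
        grad_w = np.array([np.sum((Kg @ alpha) * R) for Kg in Ks]) + lam
        alpha -= lr * grad_alpha
        w = np.maximum(w - lr * grad_w, 0.0)       # keep weights non-negative
    return alpha, w

def predict(X_groups_train, X_groups_test, alpha, w, gamma=1.0):
    # Score test samples with the same relevance-weighted kernel.
    K = sum(wg * rbf_kernel(Xte, Xtr, gamma)
            for wg, Xtr, Xte in zip(w, X_groups_train, X_groups_test))
    return np.argmax(K @ alpha, axis=1)
```

After training, groups whose weights in w shrink to zero can be read as gestures that contribute little to discriminating the interaction classes, which is the interpretability property the abstract emphasises.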
ISSN: 0957-4174, 1873-6793
DOI: 10.1016/j.eswa.2020.113247