Action recognition using kinematics posture feature on 3D skeleton joint locations


Bibliographic Details
Published in: Pattern Recognition Letters, May 2021, Vol. 145, pp. 216-224
Authors: Ahad, Md Atiqur Rahman; Ahmed, Masud; Das Antar, Anindya; Makihara, Yasushi; Yagi, Yasushi
Format: Article
Language: English
Online access: Full text
Description
Abstract:
•Motion information from skeletons can be efficiently encoded by treating the body joints as wearable kinematics sensors.
•Proposed Linear Joint Position Feature (LJPF) and Angular Joint Position Feature (AJPF), derived from 3D joint positions and bone angles.
•Proposed Kinematics Posture Feature (KPF), combining LJPF and AJPF to encode motion and posture variation across frames.
•Proposed Position-based Statistical Feature (PSF), computed from segmented KPF, which consists of temporal statistical features.
•Five benchmark datasets are evaluated on PSF features with statistical and deep learning-based models.

Action recognition is a widely explored research area in computer vision and related fields. We propose Kinematics Posture Feature (KPF) extraction from 3D joint positions based on skeleton data to improve action recognition performance. In this approach, we treat the 3D skeleton joints as kinematics sensors. We propose the Linear Joint Position Feature (LJPF) and the Angular Joint Position Feature (AJPF), based on 3D linear joint positions and the angles between bone segments. We then combine these two kinematics features for each video frame of each action to create the KPF feature sets. These feature sets encode the variation of motion in the temporal domain, as if each body joint were a kinematics position and orientation sensor. In the next stage, we process the extracted KPF descriptors with a low-pass filter and segment them using sliding windows of optimized length, resembling the way kinematics sensor data are processed. From the segmented windows, we compute the Position-based Statistical Feature (PSF), consisting of temporal-domain statistics (e.g., mean, standard deviation, and variance). These statistics encode the variation of postures (i.e., joint positions and angles) across the video frames.
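The two building blocks of the pipeline described above — an angle between bone segments (the kind of quantity AJPF encodes) and sliding-window temporal statistics (the kind PSF aggregates) — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the specific joint triples, window length, and step size are assumptions.

```python
import math
import statistics

def bone_angle(a, b, c):
    # Angle (radians) at joint b between bone segments b->a and b->c.
    # Joints are (x, y, z) tuples; which joint triples to use is an
    # assumption, not taken from the paper.
    u = tuple(ai - bi for ai, bi in zip(a, b))
    v = tuple(ci - bi for ci, bi in zip(c, b))
    dot = sum(ui * vi for ui, vi in zip(u, v))
    nu = math.sqrt(sum(ui * ui for ui in u))
    nv = math.sqrt(sum(vi * vi for vi in v))
    # Clamp to [-1, 1] to guard against floating-point drift.
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def windowed_stats(signal, win, step):
    # Segment one per-frame feature channel with a sliding window and
    # compute temporal statistics per window (mean, std, variance),
    # in the spirit of the PSF described in the abstract.
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append((statistics.mean(w),
                      statistics.pstdev(w),
                      statistics.pvariance(w)))
    return feats
```

For example, `bone_angle((1, 0, 0), (0, 0, 0), (0, 1, 0))` gives a right angle (π/2), and `windowed_stats` applied per joint channel would yield the per-window statistics that the classifiers consume.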
For classification, we explore a linear Support Vector Machine along with RNN, CNNRNN, and ConvRNN models. The proposed PSF feature sets demonstrate prominent performance in both statistical machine learning-based and deep learning-based models. For evaluation, we explore five benchmark datasets, namely UTKinect-Action3D, the Kinect Activity Recognition Dataset (KARD), MSR 3D Action Pairs, Florence 3D, and the Office Activity Dataset (OAD). To prevent overfitting, we adopt a leave-one-subject-out experimental setup and perform 10-fold cross-validation. Our approach outperforms several existing methods.
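The leave-one-subject-out protocol mentioned above can be sketched as a split generator: each fold holds out every sample of one subject for testing and trains on the rest. A minimal illustration (the subject-ID encoding is an assumption):

```python
def leave_one_subject_out(subject_ids):
    # subject_ids: one subject label per sample, e.g. ['s1', 's1', 's2'].
    # Yields (held_out_subject, train_indices, test_indices) per fold,
    # so no subject appears in both train and test — the property the
    # abstract relies on to prevent overfitting to individual subjects.
    for held_out in sorted(set(subject_ids)):
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        yield held_out, train, test
```

For three samples from two subjects, `['s1', 's1', 's2']`, this produces two folds: one testing on both `s1` samples, one testing on the single `s2` sample.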
ISSN: 0167-8655, 1872-7344
DOI: 10.1016/j.patrec.2021.02.013