Encoding Kinematic and Temporal Gait Data in an Appearance-Based Feature for the Automatic Classification of Autism Spectrum Disorder

Bibliographic Details
Published in: IEEE Access, 2023, Vol. 11, p. 134100-134117
Main Authors: Henderson, B., Yogarajah, Pratheepan, Gardiner, Bryan, McGinnity, T. Martin
Format: Article
Language: English
Online Access: Full text
Description
Summary: In appearance-based gait analysis studies, Gait Energy Images (GEI) have been shown to be an effective tool for human identification and gait pathology detection. In addition, model-based studies have found kinematic and spatio-temporal features to be useful for gait recognition and Autism Spectrum Disorder (ASD) classification. Adapting the GEI to focus on the strong ASD features would improve the early screening of ASD by allowing the use of powerful appearance-based classifiers such as Convolutional Neural Networks (CNN). This paper introduces an enhanced GEI that averages the images from a video sequence into a single image, but retains only a person's joint positions instead of the full body silhouette. Before averaging, depth is encoded into the binary images using colour mapping, a technique borrowed from the Chrono-Gait Image. The resulting Joint Energy Image (JEI) therefore embeds both the temporal and depth information of the joints into a 2D image. The image was preprocessed using Principal Component Analysis before being applied to a Multi-Layer Perceptron and a Random Forest classifier. The JEI was also applied to a CNN directly, and accuracy improved when a Test Time Augmentation (TTA) measure was used. The CNN achieved a TTA accuracy of 95.56% when trained on a primary dataset of 100 subjects (50 with ASD and 50 typically developed), and 80% TTA accuracy on a secondary dataset of 20 subjects (10 with ASD and 10 typically developed) across multiple tests.
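The core construction described in the summary — colour-mapping depth into per-frame binary joint maps and then averaging them over the sequence, as in the GEI — can be sketched as follows. This is only an illustrative stand-in: the function name, the array layout, and the linear red-to-blue depth ramp are assumptions, not the paper's actual colour mapping (which is adapted from the Chrono-Gait Image).

```python
import numpy as np

def joint_energy_image(joint_maps, depths):
    """Illustrative sketch of a Joint Energy Image (JEI).

    joint_maps: (T, H, W) binary arrays marking joint positions per frame.
    depths:     (T, H, W) normalised depth values in [0, 1] at those joints.
    Returns an (H, W, 3) image averaging the colour-mapped frames.
    """
    T, H, W = joint_maps.shape
    coloured = np.zeros((T, H, W, 3), dtype=np.float64)
    for t in range(T):
        d = depths[t]
        # Hypothetical colour mapping: encode depth into the red and blue
        # channels (near joints red, far joints blue). The paper's mapping,
        # borrowed from the Chrono-Gait Image, differs in detail.
        coloured[t, ..., 0] = joint_maps[t] * (1.0 - d)
        coloured[t, ..., 2] = joint_maps[t] * d
    # Temporal averaging, as in the GEI, embeds how persistently each
    # joint occupies each pixel across the gait cycle.
    return coloured.mean(axis=0)
```

The averaged image can then be flattened and fed to PCA before a classical classifier, or passed directly to a CNN, as the summary describes.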
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3336861