Generisch-Net: A Generic Deep Model for Analyzing Human Motion with Wearable Sensors in the Internet of Health Things

Bibliographic Details
Published in: Sensors (Basel, Switzerland), 2024-09, Vol. 24 (19), p. 6167
Main authors: Hamza, Kiran; Riaz, Qaiser; Imran, Hamza Ali; Hussain, Mehdi; Krüger, Björn
Format: Article
Language: English
Online access: Full text
Description
Summary: The Internet of Health Things (IoHT) is a broader version of the Internet of Things. Its main goal is to intervene autonomously from geographically diverse regions and provide low-cost preventative or active healthcare treatments. Smart wearable IMUs for human motion analysis have proven to provide valuable insights into a person's psychological state, activities of daily living, identification/re-identification through gait signatures, etc. The existing literature, however, focuses on specificity, i.e., problem-specific deep models. This work presents a generic BiGRU-CNN deep model that can predict the emotional state of a person, classify activities of daily living, and re-identify a person in a closed-loop scenario. For training and validation, we employed publicly available and closed-access datasets. The data were collected with wearable inertial measurement units mounted non-invasively on the bodies of the subjects. Our findings demonstrate that the generic model achieves an accuracy of 96.97% in classifying activities of daily living. Additionally, it re-identifies individuals in closed-loop scenarios with an accuracy of 93.71% and estimates emotional states with an accuracy of 78.20%. This study represents a significant effort towards developing a versatile deep-learning model for human motion analysis using wearable IMUs, demonstrating promising results across multiple applications.
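
To make the abstract's "generic BiGRU-CNN" idea concrete, the following is a minimal illustrative sketch in PyTorch of how a combined convolutional and bidirectional-GRU classifier over windowed IMU signals can be structured. The channel count, window length, layer sizes, and number of output classes are assumptions for illustration only and are not the authors' published configuration.

# Hypothetical sketch of a BiGRU-CNN classifier for windowed IMU data
# (accelerometer + gyroscope channels). Hyperparameters are illustrative
# assumptions, not the configuration reported in the paper.
import torch
import torch.nn as nn


class BiGRUCNN(nn.Module):
    def __init__(self, in_channels=6, num_classes=12, hidden_size=64):
        super().__init__()
        # 1D convolutions extract local temporal features from the raw window.
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # A bidirectional GRU models longer-range temporal dependencies.
        self.bigru = nn.GRU(64, hidden_size, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, x):
        # x: (batch, time, channels); Conv1d expects (batch, channels, time).
        feats = self.cnn(x.transpose(1, 2))          # (batch, 64, time/4)
        out, _ = self.bigru(feats.transpose(1, 2))   # (batch, time/4, 2*hidden)
        return self.classifier(out[:, -1, :])        # last step -> class logits


# Example: a batch of 8 windows, 128 samples each, 6 IMU channels.
logits = BiGRUCNN()(torch.randn(8, 128, 6))
print(logits.shape)  # torch.Size([8, 12])

In this arrangement, swapping the final classification head (activities of daily living, person re-identification, or emotional state) while keeping the shared BiGRU-CNN backbone is one plausible way a single generic model could serve the three tasks the abstract describes.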
ISSN: 1424-8220
DOI: 10.3390/s24196167