Enhancing automated lower limb rehabilitation exercise task recognition through multi-sensor data fusion in tele-rehabilitation
Published in: Biomedical Engineering Online, 2024-03, Vol. 23 (1), p. 35, Article 35
Main authors: ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Tele-rehabilitation is the provision of physiotherapy services to individuals in their own homes. Activity recognition plays a crucial role in automatic tele-rehabilitation: by assessing patient movements, identifying exercises, and providing feedback, these platforms can offer clinicians insightful information and thereby facilitate an improved plan of care. This study introduces a novel deep learning approach for identifying lower limb rehabilitation exercises through the integration of depth data and pressure heatmaps. We hypothesized that combining pressure heatmaps and depth data would improve the model's overall performance.
In this study, depth videos and body pressure data from a publicly accessible online dataset were used. The dataset comprises data from 30 healthy individuals performing 7 lower limb rehabilitation exercises. For the classification task, three deep learning models were developed, all based on an established 3D-CNN architecture. The models were designed to classify the depth videos, the sequences of pressure data frames, and the combination of depth videos and pressure frames. Model performance was assessed through leave-one-subject-out and leave-multiple-subjects-out cross-validation, and accuracy, precision, recall, and F1 score were reported for each model.
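As a rough illustration of the fusion idea described above, the sketch below shows a two-branch 3D-CNN in PyTorch that processes a depth clip and a pressure-frame sequence and concatenates their pooled features before classification. The layer sizes, input shapes, and late-fusion point are assumptions for illustration, not the authors' exact architecture.

```python
# Hypothetical two-branch 3D-CNN fusion sketch (not the paper's exact model).
import torch
import torch.nn as nn

class Branch3DCNN(nn.Module):
    """Small 3D-CNN feature extractor for one modality (depth or pressure)."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling -> fixed-size feature
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.features(x).flatten(1)  # (batch, 32)

class FusionModel(nn.Module):
    """Late fusion: concatenate per-modality features, then classify 7 exercises."""
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.depth_branch = Branch3DCNN(in_channels=1)
        self.pressure_branch = Branch3DCNN(in_channels=1)
        self.classifier = nn.Linear(32 + 32, num_classes)

    def forward(self, depth: torch.Tensor, pressure: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.depth_branch(depth), self.pressure_branch(pressure)], dim=1)
        return self.classifier(fused)

# Example inputs: (batch, channels, frames, height, width); sizes are placeholders.
model = FusionModel(num_classes=7)
depth_clip = torch.randn(2, 1, 16, 64, 64)
pressure_clip = torch.randn(2, 1, 16, 32, 16)
logits = model(depth_clip, pressure_clip)  # shape (2, 7)
```

Late fusion of globally pooled features is only one possible design; the point at which depth and pressure streams are merged could equally sit earlier in the network.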
Our findings indicated that the model trained on the fusion of depth and pressure data showed the highest and most stable performance compared with the single-modality models. It identified the exercises with an accuracy of 95.71%, precision of 95.83%, recall of 95.71%, and an F1 score of 95.74%.
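For context, a leave-one-subject-out evaluation with the reported metrics could be computed along the following lines with scikit-learn. The subject IDs, labels, and predictions below are synthetic placeholders standing in for the real model outputs on held-out subjects.

```python
# Minimal leave-one-subject-out evaluation sketch with placeholder data.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
n_samples, n_subjects, n_classes = 210, 30, 7
subjects = np.repeat(np.arange(n_subjects), n_samples // n_subjects)  # subject ID per sample
y_true = rng.integers(0, n_classes, size=n_samples)

logo = LeaveOneGroupOut()
all_true, all_pred = [], []
for train_idx, test_idx in logo.split(np.zeros(n_samples), y_true, groups=subjects):
    # In the real pipeline the fusion model would be trained on train_idx
    # and evaluated on the held-out subject in test_idx.
    y_pred = rng.integers(0, n_classes, size=len(test_idx))  # placeholder predictions
    all_true.extend(y_true[test_idx])
    all_pred.extend(y_pred)

print("accuracy :", accuracy_score(all_true, all_pred))
print("precision:", precision_score(all_true, all_pred, average="weighted", zero_division=0))
print("recall   :", recall_score(all_true, all_pred, average="weighted", zero_division=0))
print("f1 score :", f1_score(all_true, all_pred, average="weighted", zero_division=0))
```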
Our results highlight the value of data fusion for accurately classifying lower limb rehabilitation exercises. We showed that our model could capture different aspects of the exercise movements using the visual and weight distribution data from the depth camera and pressure mat, respectively. This integration of data provides a better representation of exercise patterns, leading to higher classification performance. Notably, our results indicate the potential application of this model in automatic tele-rehabilitation platforms.
ISSN: 1475-925X
DOI: 10.1186/s12938-024-01228-w