Healthcare System from Multisensor Collaboration and Human Action Recognition
Published in: Sensors and Materials, 2024-08, Vol. 36 (8), p. 3239
Format: Article
Language: English
Online access: Full text
Abstract: Over the past few decades, wearable sensor technology has played a pivotal role in patient information acquisition. A new paradigm for unconstrained medical data collection emerged with noncontact sensors such as Kinect and industrial-grade RGB cameras. These innovations have enhanced patient experiences and offer vast potential for long-term monitoring, telemedicine, and remote healthcare in underserved areas. However, many research efforts have focused on specific components, such as sensor development or algorithm design, which has occasionally led to challenges in the reliability, accuracy, and connectivity of noncontact systems. Consequently, there is a pressing need for research that adopts a more holistic approach, ensuring the optimal integration of sensors and algorithms. In this study, we introduce a method of constructing a noncontact diagnostic system powered by deep-learning vision algorithms that show strong resilience against viewpoint changes and obstructions in human motion identification and assessment. Using four RGB cameras, we capture human dynamics and leverage a pose estimator to generate comprehensive 3D human postures. These postures are then refined to aid in subsequent behavior prediction. Our multitask-trained model significantly bolsters the system's adaptability to posture discrepancies. Notably, this noncontact diagnostic system thrives in challenging environments, such as 360-degree surveillance, intricate scenes, and low-light settings, where traditional sensors often falter. In addition, we have assembled a multiview autism behavior dataset, on which our embedded deep-learning algorithm achieves exemplary action category recognition (reaching up to 95.09% accuracy), further highlighting its practical implications.
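The abstract outlines a pipeline in which four calibrated RGB cameras feed a pose estimator, the per-view 2D detections are fused into 3D postures, and the refined posture sequences drive action recognition. The paper's implementation is not reproduced here, so the following is a minimal sketch of one standard way to realize the fusion step, linear (DLT) multiview triangulation, together with the interface a downstream action classifier might expose. The camera and joint counts, function names, and the classifier stub are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of the multiview fusion step described in the abstract:
# per-camera 2D keypoints -> triangulated 3D posture -> action recognition.
import numpy as np

NUM_CAMERAS = 4   # the abstract states that four RGB cameras are used
NUM_JOINTS = 17   # assumed COCO-style skeleton; the paper's layout may differ

def triangulate_joint(projections, points_2d):
    """Linear (DLT) triangulation of one joint from multiple calibrated views.

    projections: NUM_CAMERAS 3x4 projection matrices (intrinsics @ extrinsics).
    points_2d:   matching (u, v) pixel detections from the 2D pose estimator.
    """
    rows = []
    for P, (u, v) in zip(projections, points_2d):
        # Each view adds two linear constraints on the homogeneous 3D point X:
        # u * (P[2] @ X) = P[0] @ X  and  v * (P[2] @ X) = P[1] @ X.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # The algebraic least-squares solution is the right singular vector
    # associated with the smallest singular value of A.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize to (x, y, z)

def lift_frame_to_3d(projections, poses_2d):
    """Fuse one frame of per-camera 2D poses, shape (cameras, joints, 2),
    into a single (joints, 3) posture."""
    return np.stack([triangulate_joint(projections, poses_2d[:, j])
                     for j in range(NUM_JOINTS)])

def classify_action(pose_sequence):
    """Placeholder for the multitask-trained recognizer the abstract
    describes; pose_sequence is (frames, joints, 3) of refined postures."""
    raise NotImplementedError("stand-in for the trained action model")
```

A real system would additionally down-weight or drop views whose 2D detections are low-confidence before triangulating, which is how the reported robustness to viewpoint changes and occlusion is typically obtained; the abstract's posture refinement stage presumably serves a similar purpose.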
ISSN: 0914-4935, 2435-0869
DOI: 10.18494/SAM4786