A dual-stream fused neural network for fall detection in multi-camera and 360° videos
Saved in:
Published in: Neural computing & applications, 2022, Vol. 34 (2), p. 1455-1482
Main authors:
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Globally, human falls are the second leading cause of death from unintentional injuries. In most cases, these fatalities arise from a lack of timely medical attention. Therefore, over the years, there has been an immense demand for systems that can quickly send fall-related information to caretakers so that a medical relief team can arrive in time. Traditional fall-detection schemes based on wearable sensors such as accelerometers and gyroscopes are highly intrusive and generate many false positives in real-world conditions. Consequently, current research in this domain has turned toward harnessing low-cost vision sensors and the power of deep learning. To this end, in this work, we present a dual-stream fused neural network (DSFNN) for fall detection in multi-camera and 360° video streams. The DSFNN model learns to extract spatial-temporal information using two neural networks, trained independently on RGB video sequences of fall and non-fall activities and on their corresponding single dynamic images. Once trained, the model fuses the prediction scores of the two neural networks using a weighted fusion scheme to obtain the final decision. We assessed the performance of the proposed DSFNN on two multi-camera fall datasets, UP-Fall and URFD, and on a new in-house 360° video dataset of fall and non-fall activities. Evaluation across different performance metrics demonstrates the superiority of the proposed fall detection scheme, which outperformed previous state-of-the-art fall detection methods. For further research and analysis in the fall detection domain, we will make the source code and the in-house fall dataset available to the research community on request.
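The abstract mentions a weighted fusion of the two streams' prediction scores but gives no implementation details, so the following is only a minimal illustrative sketch of weighted late fusion. The function name, the single scalar weight, and the two-class (non-fall / fall) score layout are assumptions for illustration, not the authors' code.

```python
import numpy as np

def fuse_predictions(rgb_scores, dynamic_scores, w=0.5):
    """Weighted late fusion of per-class scores from two streams.

    rgb_scores, dynamic_scores: arrays of shape (num_classes,) holding the
    softmax outputs of the RGB-sequence stream and the dynamic-image stream.
    w: weight given to the RGB stream; (1 - w) goes to the dynamic-image
    stream. In practice such a weight would be tuned on a validation split.
    """
    fused = w * np.asarray(rgb_scores) + (1.0 - w) * np.asarray(dynamic_scores)
    return int(np.argmax(fused)), fused

# Hypothetical two-class scores (index 0 = non-fall, index 1 = fall).
label, scores = fuse_predictions([0.3, 0.7], [0.6, 0.4], w=0.6)
print(label, scores)  # 1 [0.42 0.58] -> the fused decision is "fall"
```

A convex combination like this keeps the fused vector a valid score distribution whenever both inputs are, so the final decision is simply the argmax of the weighted sum.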
ISSN: 0941-0643, 1433-3058
DOI: 10.1007/s00521-021-06495-5