A Multimodal Feature Fusion Framework for Sleep-Deprived Fatigue Detection to Prevent Accidents


Bibliographic Details
Published in: Sensors (Basel, Switzerland), 2023-04, Vol. 23 (8), p. 4129
Authors: Virk, Jitender Singh; Singh, Mandeep; Panjwani, Usha; Ray, Koushik
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Summary: A sleep-deprived, fatigued person is likely to commit more errors, some of which may even prove fatal. It is therefore necessary to recognize this fatigue. The novelty of the proposed research work is that fatigue detection is nonintrusive and based on multimodal feature fusion. In the proposed methodology, fatigue is detected using features obtained from four domains: visual images, thermal images, keystroke dynamics, and voice. Samples of each volunteer (subject) are obtained from all four domains for feature extraction, and empirical weights are assigned to the four domains. Young, healthy volunteers (n = 60) aged 20 to 30 years participated in the experimental study; during the study they abstained from alcohol, caffeine, and other drugs affecting their sleep patterns. Through this multimodal technique, appropriate weights are given to the features obtained from the four domains. The results are compared with k-nearest neighbors (kNN), support vector machine (SVM), random tree, random forest, and multilayer perceptron classifiers. The proposed nonintrusive technique achieved an average detection accuracy of 93.33% under 3-fold cross-validation.
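The abstract describes assigning empirical weights to features from the four domains before classification. A minimal sketch of such decision-level weighted fusion, assuming hypothetical weight values and per-domain fatigue scores in [0, 1] (the paper's actual weights, features, and decision threshold are not given in this record):

```python
# Hypothetical empirical weights for the four domains named in the abstract
# (visual images, thermal images, keystroke dynamics, voice). The actual
# values used in the paper are not stated in this record.
WEIGHTS = {"visual": 0.35, "thermal": 0.25, "keystroke": 0.20, "voice": 0.20}

def fuse_scores(domain_scores, weights=WEIGHTS, threshold=0.5):
    """Combine per-domain fatigue scores (each in [0, 1]) into one
    weighted score and a binary fatigued/alert decision."""
    fused = sum(weights[d] * domain_scores[d] for d in weights)
    return fused, fused >= threshold

# Example: three domains indicate fatigue, the voice domain does not.
score, fatigued = fuse_scores(
    {"visual": 0.8, "thermal": 0.7, "keystroke": 0.6, "voice": 0.3}
)
```

In the paper the fused multimodal features are fed to classifiers (kNN, SVM, random tree, random forest, multilayer perceptron); the simple threshold here only stands in for that final decision stage.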
ISSN: 1424-8220
DOI: 10.3390/s23084129