DriSm_YNet: a breakthrough in real-time recognition of driver smoking behavior using YOLO-NAS



Bibliographic details
Published in: Neural Computing & Applications, 2024-10, Vol. 36 (29), pp. 18413-18432
Authors: Pandey, Nageshwar Nath; Pati, Avadh; Maurya, Ritesh
Format: Article
Language: English
Description
Abstract: Driver smoking rates are rising day by day. This is especially critical when operating a vehicle, given the number of fatal traffic accidents caused by this careless behavior. Overcoming the problem therefore requires a reliable system that works quickly and with reasonable accuracy, and this study aims to satisfy that demand. The study uses the large HMDB51 video dataset, which is first pre-processed with image enhancement techniques, namely histogram equalization and gamma correction. Computer vision techniques, Haar Cascade and YOLO-NAS, are then employed to detect the face, mouth, and eye regions of interest. An occlusion condition based on eye parameters is formulated to discard occluded frames at the initial processing stage, and TransGAN-based augmentation is applied to avoid the underfitting that could result from removing those frames. Finally, spatio-temporal features are extracted from the mouth ROI with InceptionV3 and fed into an LSTM to classify the driver's smoking and non-smoking states. The proposed model categorizes these states with a remarkable accuracy of 96.5%, evaluated using the AUC-ROC score and confusion matrix, outperforming existing models.
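
The pipeline summarized in the abstract (frame enhancement, ROI detection, occlusion filtering, augmentation, then spatio-temporal classification) can be pictured with a short sketch. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: it uses OpenCV for the histogram-equalization and gamma-correction step and the Keras InceptionV3/LSTM APIs for the classification head; the sequence length, input size, gamma value, and LSTM width are placeholder choices, and the Haar Cascade/YOLO-NAS detection, eye-based occlusion filter, and TransGAN augmentation stages are omitted.

```python
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3

def enhance_frame(frame_bgr, gamma=1.5):
    """Histogram equalization on the luminance channel, then gamma correction."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    equalized = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
    # Gamma correction via a lookup table.
    table = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255).astype("uint8")
    return cv2.LUT(equalized, table)

# Placeholder sequence settings: 16 mouth-ROI frames of 139x139 RGB pixels.
SEQ_LEN, H, W = 16, 139, 139

# InceptionV3 backbone as a per-frame spatial feature extractor.
backbone = InceptionV3(weights="imagenet", include_top=False, pooling="avg",
                       input_shape=(H, W, 3))
backbone.trainable = False

# Temporal head: per-frame features -> LSTM -> smoking / non-smoking.
inputs = layers.Input(shape=(SEQ_LEN, H, W, 3))
features = layers.TimeDistributed(backbone)(inputs)   # (SEQ_LEN, 2048)
x = layers.LSTM(128)(features)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])
model.summary()
```

In the full system, the omitted detection, occlusion-filtering, and augmentation stages would sit between the two parts shown here, supplying enhanced, non-occluded mouth-ROI sequences to the InceptionV3-LSTM head.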
ISSN: 0941-0643, 1433-3058
DOI: 10.1007/s00521-024-10162-w