Audio–visual speech recognition based on regulated transformer and spatio–temporal fusion strategy for driver assistive systems

Bibliographic Details
Published in: Expert Systems with Applications, 2024-10, Vol. 252, p. 124159, Article 124159
Authors: Ryumin, Dmitry, Axyonov, Alexandr, Ryumina, Elena, Ivanko, Denis, Kashevnik, Alexey, Karpov, Alexey
Format: Article
Language: English
Online access: Full text
Description

Abstract: This article presents a research methodology for audio–visual speech recognition (AVSR) in driver assistive systems. For safety reasons, these systems require ongoing voice-based interaction with drivers while driving. The article introduces a novel audio–visual speech command recognition transformer (AVCRFormer) specifically designed for robust AVSR. We propose (i) a multimodal fusion strategy based on spatio–temporal fusion of audio and video feature matrices, (ii) a regulated transformer based on an iterative model refinement module with multiple encoders, and (iii) a classifier ensemble strategy based on multiple decoders. The spatio–temporal fusion strategy preserves the contextual information of both modalities and synchronizes them. The iterative model refinement module bridges the gap between acoustic and visual data by leveraging their impact on speech recognition accuracy. The proposed multi-prediction strategy outperforms the traditional single-prediction strategy, demonstrating the model's adaptability across diverse audio–visual contexts. The proposed transformer achieves the highest speech command recognition accuracy, reaching 98.87% and 98.81% on the RUSAVIC and LRW corpora, respectively. This research has significant implications for advancing human–computer interaction. The capabilities of AVCRFormer extend beyond AVSR, making it a valuable contribution to the intersection of audio–visual processing and artificial intelligence.

Highlights:
• A novel transformer-based method for audio–visual speech command recognition.
• Novel fusion strategies for audio–visual features and a classifier ensemble.
• An attention visualization approach for assessing the impact of audio–visual features.
• A software application of the transformer-based method for driver assistive systems.
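As a rough illustration of the ideas summarized above, the following is a minimal PyTorch sketch of per-time-step fusion of audio and video feature matrices, a shared transformer encoder, and an ensemble of classifier heads whose logits are averaged. Every name, dimension, and design choice here (linear projections, additive fusion, mean pooling, logit averaging) is an assumption made for illustration; the actual AVCRFormer architecture is specified only in the full article.

    import torch
    import torch.nn as nn

    # Illustrative sketch only, not the authors' AVCRFormer implementation.
    class AVFusionSketch(nn.Module):
        def __init__(self, d_audio=128, d_video=256, d_model=256,
                     n_heads=4, n_layers=2, n_decoders=3, n_classes=500):
            super().__init__()
            # Project each modality to a shared width so the two feature
            # matrices can be fused time step by time step.
            self.audio_proj = nn.Linear(d_audio, d_model)
            self.video_proj = nn.Linear(d_video, d_model)
            layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
            # Several classifier heads stand in for the multi-decoder ensemble.
            self.heads = nn.ModuleList(nn.Linear(d_model, n_classes)
                                       for _ in range(n_decoders))

        def forward(self, audio, video):
            # audio: (B, T, d_audio), video: (B, T, d_video), assumed already
            # resampled to a common frame count T for synchronization.
            fused = self.audio_proj(audio) + self.video_proj(video)
            pooled = self.encoder(fused).mean(dim=1)  # (B, d_model)
            # Ensemble prediction: average the logits of all heads.
            return torch.stack([h(pooled) for h in self.heads]).mean(dim=0)

    model = AVFusionSketch()
    logits = model(torch.randn(2, 50, 128), torch.randn(2, 50, 256))
    print(logits.shape)  # torch.Size([2, 500]); 500 word classes as in LRW

Averaging logits is only one simple way to combine multiple decoders; the paper's multi-prediction strategy and its regulated, iteratively refined encoders may combine modality-specific predictions quite differently.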
ISSN: 0957-4174
DOI: 10.1016/j.eswa.2024.124159