Acoustic Modelling From Raw Source and Filter Components for Dysarthric Speech Recognition

Bibliographic Details
Published in: IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2022, Vol. 30, pp. 2968-2980
Authors: Yue, Zhengjun; Loweimi, Erfan; Christensen, Heidi; Barker, Jon; Cvetkovic, Zoran
Format: Article
Language: English
Description
Summary: Acoustic modelling for automatic dysarthric speech recognition (ADSR) is a challenging task. Data deficiency is a major problem, and substantial differences between typical and dysarthric speech complicate transfer learning. In this paper, we aim to build acoustic models for ADSR using the raw magnitude spectra of the source and filter components. The proposed multi-stream models consist of convolutional, recurrent and fully-connected layers, allowing the various information streams to be pre-processed and then fused at an optimal level of abstraction. We demonstrate that such multi-stream processing leverages the information encoded in the vocal tract and excitation components and normalises nuisance factors such as speaker attributes and speaking style. This leads to better handling of dysarthric speech, which exhibits large inter- and intra-speaker variability, and results in a notable performance gain. Furthermore, we analyse the learned convolutional filters and visualise the outputs of different layers after dimensionality reduction to demonstrate how speaker-related attributes are normalised along the pipeline. We also compare the proposed multi-stream model with various systems based on MFCC, FBank, raw waveform and i-vector features, and study the training dynamics as well as the usefulness of feature normalisation and data augmentation via speed perturbation. On the widely used TORGO and UASpeech dysarthric speech corpora, the proposed approach leads to competitive performance, with WERs of 35.3% and 30.3% on dysarthric speech, respectively.
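To make the described architecture concrete, the following is a minimal, illustrative PyTorch sketch of a two-stream acoustic model in the spirit of the abstract: separate convolutional front-ends over the raw magnitude spectra of the source (excitation) and filter (vocal tract) components, stream fusion, a recurrent layer for temporal context, and fully-connected output layers. All layer sizes, the 257-bin input dimensionality and the number of output targets are assumptions chosen for illustration, not the authors' actual configuration.

```python
# Illustrative two-stream acoustic model sketch (not the paper's exact design).
import torch
import torch.nn as nn


class TwoStreamAcousticModel(nn.Module):
    def __init__(self, num_targets=2000):  # num_targets is an assumed value
        super().__init__()
        # Per-stream convolutional front-ends applied along the frequency axis.
        def conv_frontend():
            return nn.Sequential(
                nn.Conv1d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(64),  # reduce each stream to 32 x 64 features
            )
        self.source_conv = conv_frontend()
        self.filter_conv = conv_frontend()
        # Recurrent layer over time, applied after fusing the two streams.
        self.rnn = nn.LSTM(input_size=2 * 32 * 64, hidden_size=512,
                           num_layers=2, batch_first=True, bidirectional=True)
        # Fully-connected layers producing frame-level posteriors.
        self.output = nn.Sequential(
            nn.Linear(2 * 512, 512), nn.ReLU(),
            nn.Linear(512, num_targets),
        )

    def forward(self, source_spec, filter_spec):
        # source_spec, filter_spec: (batch, time, freq_bins)
        b, t, _ = source_spec.shape
        s = self.source_conv(source_spec.reshape(b * t, 1, -1)).reshape(b, t, -1)
        f = self.filter_conv(filter_spec.reshape(b * t, 1, -1)).reshape(b, t, -1)
        fused, _ = self.rnn(torch.cat([s, f], dim=-1))  # fuse streams, add context
        return self.output(fused)  # (batch, time, num_targets)


if __name__ == "__main__":
    model = TwoStreamAcousticModel()
    src = torch.rand(2, 100, 257)   # e.g. excitation magnitude spectra
    flt = torch.rand(2, 100, 257)   # e.g. vocal-tract magnitude spectra
    print(model(src, flt).shape)    # torch.Size([2, 100, 2000])
```

Fusing after the convolutional front-ends but before the recurrent layer is one plausible reading of "fusing them at an optimal level of abstraction"; the paper's actual fusion point and layer configuration may differ.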
ISSN: 2329-9290; 2329-9304
DOI: 10.1109/TASLP.2022.3205766