An end-to-end TTS model with pronunciation predictor


Bibliographic details
Published in: International Journal of Speech Technology, 2022, Vol. 25 (4), pp. 1013-1024
Main authors: Han, Chol-Jin; Ri, Un-Chol; Mun, Song-Il; Jang, Kang-Song; Han, Song-Yun
Format: Article
Language: English
Online access: Full text
Abstract: Recent end-to-end TTS models generate human-like natural speech in real time, but they produce pronunciation errors that degrade the naturalness of the synthesized speech. In this paper, we investigate a method to alleviate the mispronunciation problem, one of the challenges in end-to-end TTS. To address this problem, we propose a novel framework that incorporates a pronunciation predictor, which predicts the phoneme sequence corresponding to a given character sequence, into the encoder of the end-to-end TTS model. Our model is based on a non-autoregressive feed-forward Transformer, which can generate the mel-spectrogram in parallel, and the pronunciation predictor also has a feed-forward architecture. Motivated by the idea that the pronunciation errors of the end-to-end model are caused by the limited and unbalanced lexical coverage of the training data, we also propose a two-stage training scheme that pre-trains the pronunciation predictor on a large-scale language dataset. Experimental results show that our model outperforms FastSpeech in naturalness assessment, and the phoneme error rate drops from 8.7% to 1.4%. From the experimental results, we also find that using the pronunciation information is effective for duration prediction.
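The abstract describes a feed-forward pronunciation predictor that maps each input character to a phoneme inside the TTS encoder. A minimal numpy sketch of such a module is shown below; all vocabulary sizes, dimensions, and the single-hidden-layer design are illustrative assumptions, not values from the paper, which does not disclose its architecture details here.

```python
import numpy as np

# Hypothetical sketch of a feed-forward pronunciation predictor:
# character ids -> embeddings -> hidden layer -> per-position phoneme logits.
# All names and sizes below are assumptions for illustration only.

rng = np.random.default_rng(0)

N_CHARS, N_PHONEMES, D = 40, 60, 32  # assumed vocabulary and embedding sizes

char_embed = rng.normal(0, 0.1, (N_CHARS, D))      # character embedding table
W1 = rng.normal(0, 0.1, (D, D)); b1 = np.zeros(D)  # hidden feed-forward layer
W2 = rng.normal(0, 0.1, (D, N_PHONEMES)); b2 = np.zeros(N_PHONEMES)  # output

def predict_phonemes(char_ids):
    """Return one predicted phoneme id per input character position."""
    x = char_embed[char_ids]          # (T, D) character embeddings
    h = np.maximum(x @ W1 + b1, 0.0)  # feed-forward hidden layer with ReLU
    logits = h @ W2 + b2              # (T, N_PHONEMES) phoneme scores
    return logits.argmax(axis=-1)     # greedy phoneme choice per position

phonemes = predict_phonemes(np.array([3, 7, 1, 15]))  # 4 characters in
print(phonemes.shape)                                  # -> 4 phonemes out
```

In the paper's two-stage scheme, such a module would first be pre-trained on a large grapheme-to-phoneme corpus and then jointly trained inside the non-autoregressive TTS encoder; with untrained random weights as above, the outputs are of course meaningless beyond their shape.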
ISSN: 1381-2416, 1572-8110
DOI: 10.1007/s10772-022-10008-7