Low-Latency Incremental Text-to-Speech Synthesis with Distilled Context Prediction Network
Saved in:
Main Authors: | , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Summary: | Incremental text-to-speech (TTS) synthesis generates utterances in small linguistic units for real-time, low-latency applications. We previously proposed an incremental TTS method that leverages a large pre-trained language model to take unobserved future context into account without waiting for the subsequent segment. Although this method achieves speech quality comparable to that of a method that waits for the future context, it entails a large amount of computation for sampling from the language model at each time step. In this paper, we propose an incremental TTS method that directly predicts the unobserved future context with a lightweight model, instead of sampling words from the large-scale language model. We perform knowledge distillation from a GPT-2-based context prediction network into a simple recurrent model by minimizing a teacher-student loss defined between the context embedding vectors of the two models. Experimental results show that the proposed method requires about one-tenth the inference time to achieve synthetic speech quality comparable to that of our previous method, and that it can perform incremental synthesis much faster than the average speaking rate of human English speakers, demonstrating its applicability to real-time applications. |
DOI: | 10.48550/arxiv.2109.10724 |
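
The abstract describes distilling a GPT-2-based context prediction network into a lightweight recurrent student by minimizing a teacher-student loss between the two models' context embedding vectors. The following is a minimal PyTorch sketch of that objective; the module sizes, the GRU architecture, and the choice of L1 distance are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# All sizes below are illustrative assumptions, not values from the paper.
VOCAB_SIZE = 10000   # token vocabulary of the observed text segments
EMB_DIM = 256        # dimensionality of the context embedding vectors
HID_DIM = 256        # hidden size of the lightweight student GRU

class StudentContextPredictor(nn.Module):
    """Simple recurrent model that predicts a context embedding from the
    observed tokens, replacing GPT-2 sampling at inference time."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMB_DIM)
        self.rnn = nn.GRU(EMB_DIM, HID_DIM, batch_first=True)
        self.proj = nn.Linear(HID_DIM, EMB_DIM)

    def forward(self, token_ids):
        x = self.embed(token_ids)        # (batch, seq_len, EMB_DIM)
        _, h = self.rnn(x)               # final hidden state: (1, batch, HID_DIM)
        return self.proj(h.squeeze(0))   # predicted context embedding

def teacher_student_loss(student_emb, teacher_emb):
    """Distillation loss between the student's and the teacher's context
    embedding vectors; L1 distance is an assumption here."""
    return F.l1_loss(student_emb, teacher_emb)

# Usage sketch: teacher embeddings would be precomputed with the frozen
# GPT-2-based context prediction network; random tensors stand in here.
student = StudentContextPredictor()
tokens = torch.randint(0, VOCAB_SIZE, (8, 12))  # a batch of observed segments
teacher_emb = torch.randn(8, EMB_DIM)           # placeholder teacher targets
loss = teacher_student_loss(student(tokens), teacher_emb)
loss.backward()
```

At inference time only the small student runs for each incoming segment, which is what makes the roughly tenfold speedup reported in the abstract plausible: the expensive language-model sampling is paid once, offline, during distillation.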