Text-to-speech synthesis from dark data with evaluation-in-the-loop data selection
Saved in:
Main authors: | , , , |
---|---|
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Order full text |
Abstract: | This paper proposes a method for selecting training data for text-to-speech (TTS) synthesis from dark data. TTS models are typically trained on high-quality speech corpora whose collection costs considerable time and money, which makes it very challenging to increase speaker variation. In contrast, there is a large amount of data whose availability is unknown (a.k.a. "dark data"), such as YouTube videos. To utilize data other than TTS corpora, previous studies have selected speech data from such sources on the basis of acoustic quality. However, given that TTS models robust to data noise have been proposed, data should be selected on the basis of its importance as training data for the given TTS model, not the quality of the speech itself. Our method selects training data through a loop of training and evaluation, on the basis of the automatically predicted quality of the synthetic speech produced by a given TTS model. Evaluations using YouTube data reveal that our method outperforms the conventional acoustic-quality-based method. |
DOI: | 10.48550/arxiv.2210.14850 |
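The evaluation-in-the-loop idea in the abstract can be sketched as an iterative prune: train on the current pool, score each utterance with an automatic quality predictor, and keep only the top-scoring data for the next round. The sketch below is a toy illustration only; `train_tts` and `predict_quality` are hypothetical stand-ins, not the paper's actual models or API.

```python
# Hedged sketch of evaluation-in-the-loop data selection.
# train_tts and predict_quality are toy placeholders, NOT the paper's method.

def train_tts(utterances):
    # Placeholder: a real system would train a TTS model on this pool.
    return {"data": list(utterances)}

def predict_quality(model, utterance):
    # Placeholder for an automatic quality (e.g. MOS) predictor; here,
    # longer utterances score higher and '#' marks simulated noise.
    return len(utterance) - 10 * utterance.count("#")

def select_training_data(candidates, rounds=2, keep_fraction=0.5):
    """Iteratively keep the utterances that yield the best predicted quality."""
    selected = list(candidates)
    for _ in range(rounds):
        model = train_tts(selected)  # retrain on the current pool
        ranked = sorted(selected,
                        key=lambda u: predict_quality(model, u),
                        reverse=True)
        keep = max(1, int(len(ranked) * keep_fraction))
        selected = ranked[:keep]     # prune low-scoring data each round
    return selected
```

With this toy scorer, noisy clips are pruned over successive rounds while clean utterances survive; in the paper's setting the scorer would instead be a learned predictor of synthetic-speech quality for the given TTS model.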