ParrotTTS: Text-to-Speech synthesis by exploiting self-supervised representations
Format: Article
Language: English
Online access: Order full text
Summary: We present ParrotTTS, a modularized text-to-speech synthesis model leveraging disentangled self-supervised speech representations. It can effectively train a multi-speaker variant using transcripts from a single speaker. ParrotTTS adapts to a new language in a low-resource setup and generalizes to languages not seen while training the self-supervised backbone. Moreover, without training on bilingual or parallel examples, ParrotTTS can transfer voices across languages while preserving speaker-specific characteristics, e.g., synthesizing fluent Hindi speech in a French speaker's voice and accent. We present extensive results in monolingual and multilingual scenarios. ParrotTTS outperforms state-of-the-art multilingual TTS models while using only a fraction of the paired data the latter require.
DOI: 10.48550/arXiv.2303.01261