RALL-E: Robust Codec Language Modeling with Chain-of-Thought Prompting for Text-to-Speech Synthesis
Format: Article
Language: English
Online Access: Order full text
Abstract: We present RALL-E, a robust language modeling method for text-to-speech (TTS) synthesis. While previous work based on large language models (LLMs) shows impressive performance on zero-shot TTS, such methods often suffer from poor robustness, such as unstable prosody (unnatural pitch and rhythm/duration) and a high word error rate (WER), due to the autoregressive prediction style of language models. The core idea behind RALL-E is chain-of-thought (CoT) prompting, which decomposes the task into simpler steps to enhance the robustness of LLM-based TTS. To realize this idea, RALL-E first predicts prosody features (pitch and duration) of the input text and uses them as intermediate conditions to predict speech tokens in a CoT style. Second, RALL-E utilizes the predicted duration prompt to guide the computation of self-attention weights in the Transformer, forcing the model to focus on the corresponding phonemes and prosody features when predicting speech tokens. Results of comprehensive objective and subjective evaluations demonstrate that, compared to the powerful baseline method VALL-E, RALL-E significantly improves the WER of zero-shot TTS from 5.6% (without reranking) and 1.7% (with reranking) to 2.5% and 1.0%, respectively. Furthermore, we demonstrate that RALL-E correctly synthesizes sentences that are hard for VALL-E, reducing the error rate from 68% to 4%.
DOI: 10.48550/arxiv.2404.03204
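
To make the duration-guided attention idea in the abstract concrete, below is a minimal sketch, not the authors' implementation, of how predicted phoneme durations could be turned into an attention mask so that each speech-token query attends only to its aligned phoneme and a small neighbourhood. The function names, the `window` parameter, and the toy shapes are illustrative assumptions rather than details taken from the paper.

```python
import torch


def build_duration_mask(durations: torch.Tensor, window: int = 1) -> torch.Tensor:
    """Build a boolean mask of shape (total_speech_tokens, num_phonemes).

    durations: (num_phonemes,) predicted number of speech tokens per phoneme.
    True marks the phoneme positions a speech-token query may attend to:
    its aligned phoneme plus `window` neighbours on each side.
    """
    num_phonemes = durations.shape[0]
    # Map every speech-token index to the phoneme whose duration covers it.
    token_to_phoneme = torch.repeat_interleave(
        torch.arange(num_phonemes), durations
    )                                                  # (T,)
    phoneme_idx = torch.arange(num_phonemes).unsqueeze(0)   # (1, P)
    centre = token_to_phoneme.unsqueeze(1)                  # (T, 1)
    return (phoneme_idx - centre).abs() <= window            # (T, P)


def masked_cross_attention(q, k, v, mask):
    """Scaled dot-product attention with disallowed phoneme positions set to -inf."""
    scores = q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5    # (T, P)
    scores = scores.masked_fill(~mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v                  # (T, d)


# Toy usage: 4 phonemes whose predicted durations sum to 9 speech tokens.
durations = torch.tensor([2, 3, 1, 3])
d_model = 8
mask = build_duration_mask(durations, window=1)               # (9, 4)
q = torch.randn(int(durations.sum()), d_model)                # speech-token queries
k = torch.randn(4, d_model)                                   # phoneme keys
v = torch.randn(4, d_model)                                   # phoneme values
out = masked_cross_attention(q, k, v, mask)                   # (9, 8)
```

With `window=0` the mask enforces a strict monotonic phoneme-to-token alignment; widening the window relaxes the constraint while still discouraging the skipping and repetition errors that the abstract attributes to unconstrained autoregressive decoding.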