Improved parallel WaveGAN vocoder with perceptually weighted spectrogram loss
Saved in:
Main Authors: , , , , ,
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: This paper proposes a spectral-domain perceptual weighting technique for Parallel WaveGAN-based text-to-speech (TTS) systems. The recently proposed Parallel WaveGAN vocoder successfully generates waveform sequences using a fast non-autoregressive WaveNet model. By employing multi-resolution short-time Fourier transform (MR-STFT) criteria with a generative adversarial network, the lightweight convolutional networks can be effectively trained without any distillation process. To further improve the vocoding performance, we propose the application of frequency-dependent weighting to the MR-STFT loss function. The proposed method penalizes perceptually sensitive errors in the frequency domain; thus, the model is optimized toward reducing auditory noise in the synthesized speech. Subjective listening test results demonstrate that our proposed method achieves 4.21 and 4.26 TTS mean opinion scores for female and male Korean speakers, respectively.
DOI: 10.48550/arxiv.2101.07412
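The abstract describes adding frequency-dependent weighting to the multi-resolution STFT loss used to train the Parallel WaveGAN generator. Below is a minimal PyTorch sketch of what such a weighted loss could look like; the weighting curve in `perceptual_weights` (a broad emphasis around 2-3 kHz, where hearing is most sensitive) and the three analysis resolutions are illustrative assumptions, not the paper's exact choices.

```python
import torch
import torch.nn.functional as F


def perceptual_weights(fft_size, sample_rate=24000):
    """Hypothetical frequency-dependent weights emphasizing ~2-3 kHz.
    This is an assumed stand-in for the paper's perceptual weighting curve."""
    freqs = torch.linspace(0, sample_rate / 2, fft_size // 2 + 1)
    return 1.0 + torch.exp(-((freqs - 2500.0) / 1500.0) ** 2)  # shape: (F,)


def weighted_stft_loss(x, y, fft_size, hop, win_len, sample_rate=24000, eps=1e-7):
    """Spectral-convergence + log-magnitude loss at one STFT resolution,
    with each frequency bin scaled by a perceptual weight."""
    window = torch.hann_window(win_len, device=x.device)
    X = torch.stft(x, fft_size, hop, win_len, window, return_complex=True).abs().clamp(min=eps)
    Y = torch.stft(y, fft_size, hop, win_len, window, return_complex=True).abs().clamp(min=eps)
    w = perceptual_weights(fft_size, sample_rate).to(x.device).unsqueeze(-1)  # (F, 1)
    sc = torch.norm(w * (Y - X), p="fro") / torch.norm(w * Y, p="fro")  # spectral convergence
    mag = F.l1_loss(w * torch.log(X), w * torch.log(Y))                 # log-magnitude distance
    return sc + mag


def multi_resolution_weighted_stft_loss(x, y, resolutions=((1024, 120, 600),
                                                           (2048, 240, 1200),
                                                           (512, 50, 240))):
    """Average the weighted loss over several (fft_size, hop, win_len) settings;
    the default resolutions follow a common Parallel WaveGAN configuration."""
    losses = [weighted_stft_loss(x, y, f, h, w) for f, h, w in resolutions]
    return sum(losses) / len(losses)
```

In such a setup, the generated and ground-truth waveforms would be passed to `multi_resolution_weighted_stft_loss` and the result combined with the adversarial loss during generator training, so that errors in perceptually sensitive frequency regions are penalized more heavily.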