Enhancing GAN-Based Vocoders with Contrastive Learning Under Data-limited Condition
Saved in:
Main authors: | , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | Vocoder models have recently achieved substantial progress in generating
authentic audio comparable to human quality while significantly reducing memory
requirement and inference time. However, these data-hungry generative models
require large-scale audio data for learning good representations. In this
paper, we apply contrastive learning methods in training the vocoder to improve
the perceptual quality of the vocoder without modifying its architecture or
adding more data. We design an auxiliary task with mel-spectrogram contrastive
learning to enhance the utterance-level quality of the vocoder model under
data-limited conditions. We also extend the task to include waveforms to
improve the multi-modality comprehension of the model and address the
discriminator overfitting problem. We optimize the additional task
simultaneously with GAN training objectives. Our results show that the tasks
improve model performance substantially in data-limited settings. |
---|---|
DOI: | 10.48550/arxiv.2309.09088 |
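For illustration only: the abstract describes adding a mel-spectrogram (and mel-to-waveform) contrastive learning task that is optimized jointly with the GAN training objectives. The sketch below shows one common way such an auxiliary term can be formed, using an NT-Xent-style contrastive loss; the encoder, loss weights, and exact pairing scheme are assumptions, not the authors' published implementation.

```python
# Minimal sketch of an auxiliary contrastive loss for a GAN vocoder.
# Assumptions (not from the paper): NT-Xent formulation, in-batch negatives,
# and hypothetical encoders/weights named below.
import torch
import torch.nn.functional as F

def nt_xent_loss(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Contrastive loss between two batches of embeddings.

    z_a, z_b: (batch, dim) embeddings of paired views, e.g. two augmented
    mel-spectrograms, or a mel-spectrogram and its waveform in the
    multi-modal extension. Rows with the same index are positives;
    all other rows in the batch serve as negatives.
    """
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature              # (batch, batch) cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)

# Hypothetical joint generator objective: the usual adversarial and
# feature-matching terms plus the weighted auxiliary contrastive term.
# adv_loss, fm_loss, mel_encoder, mel_view_1, mel_view_2, lambda_fm,
# and lambda_cl are placeholders.
#
# total_g_loss = adv_loss + lambda_fm * fm_loss \
#                + lambda_cl * nt_xent_loss(mel_encoder(mel_view_1),
#                                           mel_encoder(mel_view_2))
```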