Robust Disentangled Variational Speech Representation Learning for Zero-shot Voice Conversion
Saved in:
Main Authors: | , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
Abstract: | Traditional studies on voice conversion (VC) have made progress with parallel
training data and known speakers. Good voice conversion quality is obtained by
exploring better alignment modules or more expressive mapping functions. In this
study, we investigate zero-shot VC from the novel perspective of self-supervised
disentangled speech representation learning. Specifically, we achieve the
disentanglement by balancing the information flow between a global speaker
representation and a time-varying content representation in a sequential
variational autoencoder (VAE). Zero-shot voice conversion is performed by
feeding an arbitrary speaker embedding and content embeddings to the VAE
decoder. In addition, an on-the-fly data augmentation training strategy is
applied to make the learned representations noise-invariant. On the TIMIT and VCTK
datasets, we achieve state-of-the-art performance on both objective evaluation,
i.e., speaker verification (SV) on the speaker embedding and the content embedding, and
subjective evaluation, i.e., voice naturalness and similarity, and the system
remains robust even with noisy source/target utterances. |
---|---|
DOI: | 10.48550/arxiv.2203.16705 |
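
The abstract above describes a sequential VAE that factors an utterance into one global speaker latent and a sequence of time-varying content latents, with zero-shot conversion performed by decoding the source content latents together with an arbitrary target speaker latent. The following is a minimal, hypothetical PyTorch-style sketch of that idea; the module names, layer choices (GRU/linear), and dimensions are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a disentangled sequential VAE for zero-shot VC:
# one global speaker latent per utterance, one content latent per frame,
# and conversion by recombining latents from different utterances.
import torch
import torch.nn as nn


class DisentangledSeqVAE(nn.Module):
    def __init__(self, n_mels=80, spk_dim=64, cnt_dim=32, hidden=256):
        super().__init__()
        self.encoder = nn.GRU(n_mels, hidden, batch_first=True)
        # Global speaker posterior from the final encoder state.
        self.spk_mu = nn.Linear(hidden, spk_dim)
        self.spk_logvar = nn.Linear(hidden, spk_dim)
        # Time-varying content posterior, one latent per frame.
        self.cnt_mu = nn.Linear(hidden, cnt_dim)
        self.cnt_logvar = nn.Linear(hidden, cnt_dim)
        # Decoder maps [speaker ; content] back to mel frames.
        self.decoder = nn.GRU(spk_dim + cnt_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_mels)

    @staticmethod
    def reparameterize(mu, logvar):
        # Standard VAE reparameterization trick.
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def encode(self, mel):                       # mel: (B, T, n_mels)
        frames, last = self.encoder(mel)         # frames: (B, T, H), last: (1, B, H)
        spk = self.reparameterize(self.spk_mu(last[-1]), self.spk_logvar(last[-1]))
        cnt = self.reparameterize(self.cnt_mu(frames), self.cnt_logvar(frames))
        return spk, cnt                          # (B, spk_dim), (B, T, cnt_dim)

    def decode(self, spk, cnt):
        # Broadcast the global speaker latent over time, concatenate with content.
        spk_seq = spk.unsqueeze(1).expand(-1, cnt.size(1), -1)
        h, _ = self.decoder(torch.cat([spk_seq, cnt], dim=-1))
        return self.out(h)                       # reconstructed mel: (B, T, n_mels)


# Zero-shot conversion: content latents from the source utterance,
# speaker latent from an (unseen) target utterance.
if __name__ == "__main__":
    vae = DisentangledSeqVAE()
    src_mel = torch.randn(1, 120, 80)            # source utterance (content provider)
    tgt_mel = torch.randn(1, 95, 80)             # target speaker utterance (style provider)
    _, src_content = vae.encode(src_mel)
    tgt_speaker, _ = vae.encode(tgt_mel)
    converted = vae.decode(tgt_speaker, src_content)
    print(converted.shape)                       # torch.Size([1, 120, 80])
```

In this sketch the disentanglement pressure would come from the training objective (reconstruction plus KL terms that balance how much information flows through the global versus the per-frame latents); the noise-invariance mentioned in the abstract corresponds to encoding augmented (noisy) inputs while reconstructing clean targets, which is not shown here.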