VQMIVC: Vector Quantization and Mutual Information-Based Unsupervised Speech Representation Disentanglement for One-shot Voice Conversion
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: One-shot voice conversion (VC), which performs conversion across arbitrary speakers with only a single target-speaker utterance for reference, can be effectively achieved by speech representation disentanglement. Existing work generally ignores the correlation between different speech representations during training, which causes leakage of content information into the speaker representation and thus degrades VC performance. To alleviate this issue, we employ vector quantization (VQ) for content encoding and introduce mutual information (MI) as the correlation metric during training, to achieve proper disentanglement of content, speaker and pitch representations, by reducing their inter-dependencies in an unsupervised manner. Experimental results demonstrate the superiority of the proposed method in learning effective disentangled speech representations for retaining source linguistic content and intonation variations, while capturing target speaker characteristics. In doing so, the proposed approach achieves higher speech naturalness and speaker similarity than current state-of-the-art one-shot VC systems. Our code, pre-trained models and demo are available at https://github.com/Wendison/VQMIVC.
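The abstract names two concrete mechanisms: vector quantization for content encoding, and mutual information as a correlation penalty between representations. The PyTorch code below is a minimal sketch of those two ideas, not the authors' implementation (that is available at the linked repository): `VectorQuantizer` is a standard VQ-VAE-style layer with a straight-through gradient, and `CLUBEstimator` is a CLUB-style variational upper bound on MI, one common choice for this kind of unsupervised MI minimization. All class names, dimensions and hyperparameters here are illustrative assumptions.

```python
# Minimal sketch (PyTorch assumed) of the two mechanisms named in the
# abstract: VQ for content encoding and an MI upper bound as a correlation
# penalty. Names, sizes and hyperparameters are illustrative, not taken
# from the authors' repository.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VectorQuantizer(nn.Module):
    """VQ-VAE-style quantizer with a straight-through gradient."""

    def __init__(self, num_codes=512, dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.beta = beta  # commitment-loss weight

    def forward(self, z):  # z: (batch, time, dim) content features
        flat = z.reshape(-1, z.size(-1))
        # Squared Euclidean distance to every codebook entry.
        dist = (flat.pow(2).sum(1, keepdim=True)
                - 2 * flat @ self.codebook.weight.t()
                + self.codebook.weight.pow(2).sum(1))
        codes = self.codebook(dist.argmin(1)).view_as(z)
        # Codebook loss + commitment loss (stop-gradients as in VQ-VAE).
        vq_loss = (F.mse_loss(codes, z.detach())
                   + self.beta * F.mse_loss(z, codes.detach()))
        # Straight-through: copy gradients from quantized to continuous z.
        return z + (codes - z).detach(), vq_loss


class CLUBEstimator(nn.Module):
    """CLUB-style variational upper bound on MI(x; y)."""

    def __init__(self, x_dim, y_dim, hidden=128):
        super().__init__()
        self.mu = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                nn.Linear(hidden, y_dim))
        self.logvar = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, y_dim), nn.Tanh())

    def loglik(self, x, y):
        # Gaussian log-likelihood of q(y|x); maximized to train the estimator.
        mu, logvar = self.mu(x), self.logvar(x)
        return (-(y - mu).pow(2) / logvar.exp() - logvar).sum(1).mean()

    def mi_upper_bound(self, x, y):
        # E[log q(y|x)] over matched pairs minus the same over shuffled pairs.
        mu, logvar = self.mu(x), self.logvar(x)
        pos = -(y - mu).pow(2) / logvar.exp()                      # (N, D)
        neg = (-(y.unsqueeze(0) - mu.unsqueeze(1)).pow(2)
               / logvar.exp().unsqueeze(1))                        # (N, N, D)
        return (pos.sum(1) - neg.sum(2).mean(1)).mean() / 2


# Illustrative use: penalize dependence between content and speaker features.
content = torch.randn(8, 64)   # hypothetical per-utterance content summary
speaker = torch.randn(8, 32)   # hypothetical speaker embedding
club = CLUBEstimator(64, 32)
est_loss = -club.loglik(content.detach(), speaker.detach())  # fit estimator
mi_penalty = club.mi_upper_bound(content, speaker)           # train encoders
```

In a setup like this, training would alternate between fitting the estimator (minimizing `est_loss`) and updating the speech encoders to minimize `mi_penalty` for each representation pair (content/speaker, content/pitch, speaker/pitch), alongside reconstruction and VQ losses; the exact objectives and schedules used in VQMIVC are specified in the paper and repository.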
DOI: 10.48550/arxiv.2106.10132