One-shot Voice Conversion For Style Transfer Based On Speaker Adaptation
Format: Article
Language: English
Abstract: One-shot style transfer is a challenging task, since training on a single utterance makes the model prone to over-fitting the training data, which leads to low speaker similarity and a lack of expressiveness. In this paper, we build on the recognition-synthesis framework and propose a one-shot voice conversion approach for style transfer based on speaker adaptation. First, a speaker normalization module is adopted to remove speaker-related information from the bottleneck features extracted by ASR. Second, we apply weight regularization in the adaptation process to prevent the over-fitting caused by using only one utterance from the target speaker as training data. Finally, to comprehensively decouple the speech factors (content, speaker, and style) and to transfer the source style to the target, a prosody module is used to extract a prosody representation. Experiments show that our approach outperforms state-of-the-art one-shot VC systems in terms of style and speaker similarity while maintaining good speech quality.
DOI: 10.48550/arxiv.2111.12277
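The abstract's second step, weight regularization during adaptation, is a general fine-tuning technique that can be sketched independently of the paper. The code below is a minimal illustration, assuming an L2 penalty that keeps the adapted parameters close to their pre-trained values while fine-tuning on one utterance; the function names (adapt_one_shot, reconstruction_loss), the optimizer choice, and all hyper-parameter values are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of weight-regularized one-shot adaptation (PyTorch).
    # Assumption: the regularizer is an L2 penalty toward the pre-trained
    # weights; the paper's exact formulation may differ.
    import torch

    def adapt_one_shot(model, utterance_batch, reconstruction_loss,
                       reg_weight=1e-3, steps=100, lr=1e-4):
        """Fine-tune `model` on a single utterance while penalizing drift
        from the pre-trained starting point to limit over-fitting."""
        # Snapshot the pre-trained parameters as a fixed anchor.
        anchor = {name: p.detach().clone()
                  for name, p in model.named_parameters()}
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)

        for _ in range(steps):
            optimizer.zero_grad()
            # Task loss on the one target-speaker utterance
            # (reconstruction_loss is a user-supplied callable here).
            loss = reconstruction_loss(model, utterance_batch)
            # Weight regularization: squared distance to the anchor weights.
            reg = sum(((p - anchor[name]) ** 2).sum()
                      for name, p in model.named_parameters())
            (loss + reg_weight * reg).backward()
            optimizer.step()
        return model

With reg_weight set to zero this reduces to plain fine-tuning; larger values trade adaptation strength for robustness against over-fitting the single training utterance.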