Nonparallel Emotional Voice Conversion For Unseen Speaker-Emotion Pairs Using Dual Domain Adversarial Network & Virtual Domain Pairing
Saved in:
Main Authors: | , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Abstract: | The primary goal of an emotional voice conversion (EVC) system is to convert the
emotion of a given speech signal from one style to another style without
modifying the linguistic content of the signal. Most of the state-of-the-art
approaches convert emotions for seen speaker-emotion combinations only. In this
paper, we tackle the problem of converting the emotion of speakers whose only
neutral data are present during the time of training and testing (i.e., unseen
speaker-emotion combinations). To this end, we extend a recently proposed
StarGANv2-VC architecture by utilizing dual encoders for learning the speaker
and emotion style embeddings separately along with dual domain source
classifiers. To achieve conversion to unseen speaker-emotion
combinations, we propose a Virtual Domain Pairing (VDP) training strategy,
which virtually incorporates the speaker-emotion pairs that are not present in
the real data without compromising the min-max game of a discriminator and
generator in adversarial training. We evaluate the proposed method using a
Hindi emotional database. |
---|---|
DOI: | 10.48550/arxiv.2302.10536 |
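The core idea of Virtual Domain Pairing described in the abstract is to cover speaker-emotion combinations absent from the training corpus. A minimal sketch of that enumeration step is shown below; the function name and pairing scheme are illustrative assumptions, not the paper's exact implementation.

```python
import itertools

def virtual_domain_pairs(real_pairs):
    """Enumerate speaker-emotion combinations absent from the real data.

    `real_pairs` is an iterable of (speaker, emotion) tuples observed in
    the corpus. The virtual domains are all cross combinations of the
    observed speakers and emotions that never co-occur in real data.
    """
    real = set(real_pairs)
    speakers = {s for s, _ in real}
    emotions = {e for _, e in real}
    # Every cross combination not seen during training is a virtual domain.
    return sorted(set(itertools.product(speakers, emotions)) - real)

# Example: speaker B has only neutral recordings, so (B, happy) is unseen.
real = [("A", "neutral"), ("A", "happy"), ("B", "neutral")]
print(virtual_domain_pairs(real))  # [('B', 'happy')]
```

During adversarial training these virtual pairs could then be sampled alongside real ones so the generator learns to target them without disturbing the discriminator's min-max objective, as the abstract describes.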